-- Logs begin at Sun 2023-01-22 19:08:57 UTC, end at Mon 2023-01-23 17:57:13 UTC. --
Jan 23 16:10:19 localhost kernel: microcode: microcode updated early to revision 0xd000363, date = 2022-03-30
Jan 23 16:10:19 localhost kernel: Linux version 4.18.0-372.40.1.el8_6.x86_64 (mockbuild@x86-vm-07.build.eng.bos.redhat.com) (gcc version 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC)) #1 SMP Tue Jan 3 09:45:26 EST 2023
Jan 23 16:10:19 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-ed0ebe724eacc0e94bd1c86924b8e4057fafb13f722aa9acd962a4499dd06fc0/vmlinuz-4.18.0-372.40.1.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/ed0ebe724eacc0e94bd1c86924b8e4057fafb13f722aa9acd962a4499dd06fc0/0 ip=dhcp root=UUID=b7d7393a-4ab5-4434-a099-e66267f4b07d rw rootflags=prjquota boot=UUID=6b5eaf26-520d-4e42-90f4-4869c15c705f
Jan 23 16:10:19 localhost kernel: x86/split lock detection: disabled
Jan 23 16:10:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 16:10:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 16:10:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 16:10:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 16:10:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 16:10:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 16:10:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 16:10:19 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 16:10:19 localhost kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 16:10:19 localhost kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 16:10:19 localhost kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 16:10:19 localhost kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 23 16:10:19 localhost kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 23 16:10:19 localhost kernel: signal: max sigframe size: 3632
Jan 23 16:10:19 localhost kernel: BIOS-provided physical RAM map:
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000008efff] usable
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000000008f000-0x000000000008ffff] reserved
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x0000000000090000-0x000000000009ffff] usable
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x0000000000ffffff] usable
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x0000000001000000-0x0000000004d91fff] reserved
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x0000000004d92000-0x0000000043d47fff] usable
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x0000000043d48000-0x0000000049507fff] reserved
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x0000000049508000-0x000000004afd1fff] usable
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000004afd2000-0x000000004bfd1fff] ACPI NVS
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000004bfd2000-0x000000004c1d2fff] usable
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000004c1d3000-0x000000004c2d8fff] reserved
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000004c2d9000-0x000000005eefdfff] usable
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000005eefe000-0x000000006e3fefff] reserved
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000006e3ff000-0x000000006f3fefff] ACPI NVS
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000006f3ff000-0x000000006f7fefff] ACPI data
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000006f7ff000-0x000000006f7fffff] usable
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x000000006f800000-0x000000008fffffff] reserved
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 23 16:10:19 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000407fffffff] usable
Jan 23 16:10:19 localhost kernel: NX (Execute Disable) protection: active
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43d39020-0x43d4105f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43d39020-0x43d4105f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43d06020-0x43d3845f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43d06020-0x43d3845f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43cd3020-0x43d0545f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43cd3020-0x43d0545f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43ca0020-0x43cd225f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43ca0020-0x43cd225f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43c6d020-0x43c9f25f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43c6d020-0x43c9f25f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43c3d020-0x43c6c05f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43c3d020-0x43c6c05f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43b8c020-0x43c3c85f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43b8c020-0x43c3c85f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43adb020-0x43b8b85f] usable ==> usable
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x43adb020-0x43b8b85f] usable ==> usable
Jan 23 16:10:19 localhost kernel: extended physical RAM map:
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000008efff] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000000008f000-0x000000000008ffff] reserved
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000000090000-0x000000000009ffff] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000000ffffff] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000001000000-0x0000000004d91fff] reserved
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000004d92000-0x0000000043adb01f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043adb020-0x0000000043b8b85f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043b8b860-0x0000000043b8c01f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043b8c020-0x0000000043c3c85f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043c3c860-0x0000000043c3d01f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043c3d020-0x0000000043c6c05f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043c6c060-0x0000000043c6d01f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043c6d020-0x0000000043c9f25f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043c9f260-0x0000000043ca001f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043ca0020-0x0000000043cd225f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043cd2260-0x0000000043cd301f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043cd3020-0x0000000043d0545f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043d05460-0x0000000043d0601f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043d06020-0x0000000043d3845f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043d38460-0x0000000043d3901f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043d39020-0x0000000043d4105f] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043d41060-0x0000000043d47fff] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000043d48000-0x0000000049507fff] reserved
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000049508000-0x000000004afd1fff] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000004afd2000-0x000000004bfd1fff] ACPI NVS
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000004bfd2000-0x000000004c1d2fff] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000004c1d3000-0x000000004c2d8fff] reserved
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000004c2d9000-0x000000005eefdfff] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000005eefe000-0x000000006e3fefff] reserved
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000006e3ff000-0x000000006f3fefff] ACPI NVS
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000006f3ff000-0x000000006f7fefff] ACPI data
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000006f7ff000-0x000000006f7fffff] usable
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x000000006f800000-0x000000008fffffff] reserved
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 23 16:10:19 localhost kernel: reserve setup_data: [mem 0x0000000100000000-0x000000407fffffff] usable
Jan 23 16:10:19 localhost kernel: efi: EFI v2.70 by Dell Inc.
Jan 23 16:10:19 localhost kernel: efi: ACPI=0x6f7fe000 ACPI 2.0=0x6f7fe014 MEMATTR=0x595a2020 SMBIOS=0x69526000 SMBIOS 3.0=0x69524000 MOKvar=0x5ef02000 TPMEventLog=0x43d42020
Jan 23 16:10:19 localhost kernel: TPM Final Events table not present
Jan 23 16:10:19 localhost kernel: secureboot: Secure boot disabled
Jan 23 16:10:19 localhost kernel: SMBIOS 3.3.0 present.
Jan 23 16:10:19 localhost kernel: DMI: Dell Inc. PowerEdge R650/0PYXKY, BIOS 1.3.8 08/31/2021
Jan 23 16:10:19 localhost kernel: tsc: Detected 2200.000 MHz processor
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 16:10:19 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 16:10:19 localhost kernel: last_pfn = 0x4080000 max_arch_pfn = 0x10000000000
Jan 23 16:10:19 localhost kernel: MTRR default type: uncachable
Jan 23 16:10:19 localhost kernel: MTRR fixed ranges enabled:
Jan 23 16:10:19 localhost kernel: 00000-9FFFF write-back
Jan 23 16:10:19 localhost kernel: A0000-BFFFF uncachable
Jan 23 16:10:19 localhost kernel: C0000-FFFFF write-protect
Jan 23 16:10:19 localhost kernel: MTRR variable ranges enabled:
Jan 23 16:10:19 localhost kernel: 0 base 000000000000 mask 3F8000000000 write-back
Jan 23 16:10:19 localhost kernel: 1 base 000080000000 mask 3FFF80000000 uncachable
Jan 23 16:10:19 localhost kernel: 2 base 00007F000000 mask 3FFFFF000000 uncachable
Jan 23 16:10:19 localhost kernel: 3 disabled
Jan 23 16:10:19 localhost kernel: 4 disabled
Jan 23 16:10:19 localhost kernel: 5 disabled
Jan 23 16:10:19 localhost kernel: 6 disabled
Jan 23 16:10:19 localhost kernel: 7 disabled
Jan 23 16:10:19 localhost kernel: 8 disabled
Jan 23 16:10:19 localhost kernel: 9 disabled
Jan 23 16:10:19 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 16:10:19 localhost kernel: total RAM covered: 522224M
Jan 23 16:10:19 localhost kernel: Found optimal setting for mtrr clean up
Jan 23 16:10:19 localhost kernel: gran_size: 64K chunk_size: 32M num_reg: 9 lose cover RAM: 0G
Jan 23 16:10:19 localhost kernel: e820: update [mem 0x7f000000-0xffffffff] usable ==> reserved
Jan 23 16:10:19 localhost kernel: x2apic: enabled by BIOS, switching to x2apic ops
Jan 23 16:10:19 localhost kernel: last_pfn = 0x6f800 max_arch_pfn = 0x10000000000
Jan 23 16:10:19 localhost kernel: Using GB pages for direct mapping
Jan 23 16:10:19 localhost kernel: BRK [0x288a201000, 0x288a201fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a202000, 0x288a202fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a203000, 0x288a203fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a204000, 0x288a204fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a205000, 0x288a205fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a206000, 0x288a206fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a207000, 0x288a207fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a208000, 0x288a208fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a209000, 0x288a209fff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a20a000, 0x288a20afff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a20b000, 0x288a20bfff] PGTABLE
Jan 23 16:10:19 localhost kernel: BRK [0x288a20c000, 0x288a20cfff] PGTABLE
Jan 23 16:10:19 localhost kernel: RAMDISK: [mem 0x43d48000-0x49507fff]
Jan 23 16:10:19 localhost kernel: Allocated new RAMDISK: [mem 0x407a83c000-0x407fffb433]
Jan 23 16:10:19 localhost kernel: Move RAMDISK from [mem 0x43d48000-0x49507433] to [mem 0x407a83c000-0x407fffb433]
Jan 23 16:10:19 localhost kernel: ACPI: Early table checksum verification disabled
Jan 23 16:10:19 localhost kernel: ACPI: RSDP 0x000000006F7FE014 000024 (v02 DELL )
Jan 23 16:10:19 localhost kernel: ACPI: XSDT 0x000000006F563188 0000EC (v01 DELL PE_SC3 00000000 DELL 01000013)
Jan 23 16:10:19 localhost kernel: ACPI: FACP 0x000000006F7F5000 000114 (v06 DELL PE_SC3 00000000 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: DSDT 0x000000006F770000 07EAB6 (v02 DELL PE_SC3 00000003 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: FACS 0x000000006F333000 000040
Jan 23 16:10:19 localhost kernel: ACPI: SSDT 0x000000006F7FB000 001466 (v02 INTEL RAS_ACPI 00000001 INTL 20210331)
Jan 23 16:10:19 localhost kernel: ACPI: SSDT 0x000000006F7FA000 000745 (v02 INTEL ADDRXLAT 00000001 INTL 20210331)
Jan 23 16:10:19 localhost kernel: ACPI: MCEJ 0x000000006F7F9000 000130 (v01 INTEL 00000001 INTL 0100000D)
Jan 23 16:10:19 localhost kernel: ACPI: EINJ 0x000000006F7F8000 000150 (v01 DELL PE_SC3 00000001 INTL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: BERT 0x000000006F7F7000 000030 (v01 DELL PE_SC3 00000001 INTL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: ERST 0x000000006F7F6000 000230 (v01 DELL PE_SC3 00000001 INTL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: HMAT 0x000000006F7F4000 000180 (v01 DELL PE_SC3 00000001 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: HPET 0x000000006F7F3000 000038 (v01 DELL PE_SC3 00000001 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: MCFG 0x000000006F7F2000 00003C (v01 DELL PE_SC3 00000001 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: MIGT 0x000000006F7F1000 000040 (v01 DELL PE_SC3 00000000 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: MSCT 0x000000006F7F0000 000090 (v01 DELL PE_SC3 00000001 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: WSMT 0x000000006F7EF000 000028 (v01 DELL PE_SC3 00000000 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: APIC 0x000000006F76F000 00075E (v04 DELL PE_SC3 00000000 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: SLIT 0x000000006F76D000 00102C (v01 DELL PE_SC3 00000001 DELL 01000013)
Jan 23 16:10:19 localhost kernel: ACPI: SRAT 0x000000006F766000 006430 (v03 DELL PE_SC3 00000002 DELL 01000013)
Jan 23 16:10:19 localhost kernel: ACPI: OEM4 0x000000006F5DE000 187A61 (v02 INTEL CPU CST 00003000 INTL 20210331)
Jan 23 16:10:19 localhost kernel: ACPI: SSDT 0x000000006F567000 0764A5 (v02 INTEL SSDT PM 00004000 INTL 20210331)
Jan 23 16:10:19 localhost kernel: ACPI: SSDT 0x000000006F566000 000A1F (v02 DELL PE_SC3 00000000 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: HEST 0x000000006F565000 00017C (v01 DELL PE_SC3 00000001 INTL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: SSDT 0x000000006F564000 000623 (v02 DELL Tpm2Tabl 00001000 INTL 20210331)
Jan 23 16:10:19 localhost kernel: ACPI: TPM2 0x000000006F7FD000 00004C (v04 DELL PE_SC3 00000002 DELL 01000013)
Jan 23 16:10:19 localhost kernel: ACPI: SSDT 0x000000006F55B000 007299 (v02 INTEL SpsNm 00000002 INTL 20210331)
Jan 23 16:10:19 localhost kernel: ACPI: SSDT 0x000000006F55A000 000918 (v02 DELL PE_SC3 00000002 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: DMAR 0x000000006F559000 0001C0 (v01 DELL PE_SC3 00000001 DELL 00000001)
Jan 23 16:10:19 localhost kernel: ACPI: Reserving FACP table memory at [mem 0x6f7f5000-0x6f7f5113]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0x6f770000-0x6f7eeab5]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving FACS table memory at [mem 0x6f333000-0x6f33303f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0x6f7fb000-0x6f7fc465]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0x6f7fa000-0x6f7fa744]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving MCEJ table memory at [mem 0x6f7f9000-0x6f7f912f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving EINJ table memory at [mem 0x6f7f8000-0x6f7f814f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving BERT table memory at [mem 0x6f7f7000-0x6f7f702f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving ERST table memory at [mem 0x6f7f6000-0x6f7f622f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving HMAT table memory at [mem 0x6f7f4000-0x6f7f417f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving HPET table memory at [mem 0x6f7f3000-0x6f7f3037]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving MCFG table memory at [mem 0x6f7f2000-0x6f7f203b]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving MIGT table memory at [mem 0x6f7f1000-0x6f7f103f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving MSCT table memory at [mem 0x6f7f0000-0x6f7f008f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving WSMT table memory at [mem 0x6f7ef000-0x6f7ef027]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving APIC table memory at [mem 0x6f76f000-0x6f76f75d]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SLIT table memory at [mem 0x6f76d000-0x6f76e02b]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SRAT table memory at [mem 0x6f766000-0x6f76c42f]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving OEM4 table memory at [mem 0x6f5de000-0x6f765a60]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0x6f567000-0x6f5dd4a4]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0x6f566000-0x6f566a1e]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving HEST table memory at [mem 0x6f565000-0x6f56517b]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0x6f564000-0x6f564622]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving TPM2 table memory at [mem 0x6f7fd000-0x6f7fd04b]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0x6f55b000-0x6f562298]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0x6f55a000-0x6f55a917]
Jan 23 16:10:19 localhost kernel: ACPI: Reserving DMAR table memory at [mem 0x6f559000-0x6f5591bf]
Jan 23 16:10:19 localhost kernel: ACPI: Local APIC address 0xfee00000
Jan 23 16:10:19 localhost kernel: Setting APIC routing to cluster x2apic.
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0000 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0001 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0002 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0003 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0004 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0005 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0006 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0007 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0008 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0009 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x000a -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x000b -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x000c -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x000d -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x000e -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x000f -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0010 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0011 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0012 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0013 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0014 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0015 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0016 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0017 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0018 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0019 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x001a -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x001b -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x001c -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x001d -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x001e -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x001f -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0020 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0021 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0022 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0023 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0024 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0025 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0026 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0027 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0028 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0029 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x002a -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x002b -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x002c -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x002d -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x002e -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x002f -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0030 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0031 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0032 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0033 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0034 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0035 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0036 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 0 -> APIC 0x0037 -> Node 0
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0080 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0081 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0082 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0083 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0084 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0085 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0086 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0087 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0088 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0089 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x008a -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x008b -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x008c -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x008d -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x008e -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x008f -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0090 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0091 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0092 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0093 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0094 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0095 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0096 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0097 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0098 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x0099 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x009a -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x009b -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x009c -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x009d -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x009e -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x009f -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a0 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a1 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a2 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a3 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a4 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a5 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a6 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a7 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a8 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00a9 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00aa -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00ab -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00ac -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00ad -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00ae -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00af -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00b0 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00b1 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00b2 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00b3 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00b4 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00b5 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00b6 -> Node 1
Jan 23 16:10:19 localhost kernel: SRAT: PXM 1 -> APIC 0x00b7 -> Node 1
Jan 23 16:10:19 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 23 16:10:19 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x207fffffff]
Jan 23 16:10:19 localhost kernel: ACPI: SRAT: Node 1 PXM 1 [mem 0x2080000000-0x407fffffff]
Jan 23 16:10:19 localhost kernel: NUMA: Initialized distance table, cnt=2
Jan 23 16:10:19 localhost kernel: NUMA: Node 0 [mem 0x00000000-0x7fffffff] + [mem 0x100000000-0x207fffffff] -> [mem 0x00000000-0x207fffffff]
Jan 23 16:10:19 localhost kernel: NODE_DATA(0) allocated [mem 0x207ffd5000-0x207fffffff]
Jan 23 16:10:19 localhost kernel: NODE_DATA(1) allocated [mem 0x407a810000-0x407a83afff]
Jan 23 16:10:19 localhost kernel: Zone ranges:
Jan 23 16:10:19 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 16:10:19 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 16:10:19 localhost kernel: Normal [mem 0x0000000100000000-0x000000407fffffff]
Jan 23 16:10:19 localhost kernel: Device empty
Jan 23 16:10:19 localhost kernel: Movable zone start for each node
Jan 23 16:10:19 localhost kernel: Early memory node ranges
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000008efff]
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x0000000000090000-0x000000000009ffff]
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x0000000000100000-0x0000000000ffffff]
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x0000000004d92000-0x0000000043d47fff]
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x0000000049508000-0x000000004afd1fff]
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x000000004bfd2000-0x000000004c1d2fff]
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x000000004c2d9000-0x000000005eefdfff]
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x000000006f7ff000-0x000000006f7fffff]
Jan 23 16:10:19 localhost kernel: node 0: [mem 0x0000000100000000-0x000000207fffffff]
Jan 23 16:10:19 localhost kernel: node 1: [mem 0x0000002080000000-0x000000407fffffff]
Jan 23 16:10:19 localhost kernel: Zeroed struct page in unavailable ranges: 79803 pages
Jan 23 16:10:19 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000207fffffff]
Jan 23 16:10:19 localhost kernel: On node 0 totalpages: 33376325
Jan 23 16:10:19 localhost kernel: DMA zone: 64 pages used for memmap
Jan 23 16:10:19 localhost kernel: DMA zone: 1182 pages reserved
Jan 23 16:10:19 localhost kernel: DMA zone: 3998 pages, LIFO batch:0
Jan 23 16:10:19 localhost kernel: DMA32 zone: 5347 pages used for memmap
Jan 23 16:10:19 localhost kernel: DMA32 zone: 342183 pages, LIFO batch:63
Jan 23 16:10:19 localhost kernel: Normal zone: 516096 pages used for memmap
Jan 23 16:10:19 localhost kernel: Normal zone: 33030144 pages, LIFO batch:63
Jan 23 16:10:19 localhost kernel: Initmem setup node 1 [mem 0x0000002080000000-0x000000407fffffff]
Jan 23 16:10:19 localhost kernel: On node 1 totalpages: 33554432
Jan 23 16:10:19 localhost kernel: Normal zone: 524288 pages used for memmap
Jan 23 16:10:19 localhost kernel: Normal zone: 33554432 pages, LIFO batch:63
Jan 23 16:10:19 localhost kernel: ACPI: PM-Timer IO Port: 0x508
Jan 23 16:10:19 localhost kernel: ACPI: Local APIC address 0xfee00000
Jan 23 16:10:19 localhost kernel: ACPI: X2APIC_NMI (uid[0xffffffff] high edge lint[0x1])
Jan 23 16:10:19 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
Jan 23 16:10:19 localhost kernel: IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-119
Jan 23 16:10:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 16:10:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 16:10:19 localhost kernel: ACPI: IRQ0 used by override.
Jan 23 16:10:19 localhost kernel: ACPI: IRQ9 used by override.
Jan 23 16:10:19 localhost kernel: Using ACPI (MADT) for SMP configuration information
Jan 23 16:10:19 localhost kernel: ACPI: HPET id: 0x8086a701 base: 0xfed00000
Jan 23 16:10:19 localhost kernel: smpboot: Allowing 112 CPUs, 0 hotplug CPUs
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x0008f000-0x0008ffff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x000a0000-0x000fffff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x01000000-0x04d91fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43adb000-0x43adbfff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43b8b000-0x43b8bfff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43b8c000-0x43b8cfff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43c3c000-0x43c3cfff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43c3d000-0x43c3dfff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43c6c000-0x43c6cfff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43c6d000-0x43c6dfff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43c9f000-0x43c9ffff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43ca0000-0x43ca0fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43cd2000-0x43cd2fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43cd3000-0x43cd3fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43d05000-0x43d05fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43d06000-0x43d06fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43d38000-0x43d38fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43d39000-0x43d39fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43d41000-0x43d41fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x43d48000-0x49507fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x4afd2000-0x4bfd1fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x4c1d3000-0x4c2d8fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x5eefe000-0x6e3fefff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x6e3ff000-0x6f3fefff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x6f3ff000-0x6f7fefff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x6f800000-0x8fffffff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0x90000000-0xfdffffff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0xfe000000-0xfe010fff]
Jan 23 16:10:19 localhost kernel: PM: Registered nosave memory: [mem 0xfe011000-0xffffffff]
Jan 23 16:10:19 localhost kernel: [mem 0x90000000-0xfdffffff] available for PCI devices
Jan 23 16:10:19 localhost kernel: Booting paravirtualized kernel on bare hardware
Jan 23 16:10:19 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 16:10:19 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:112 nr_cpu_ids:112 nr_node_ids:2
Jan 23 16:10:19 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Jan 23 16:10:19 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u262144 alloc=1*2097152
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [0] 000 002 004 006 008 010 012 014
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [0] 016 018 020 022 024 026 028 030
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [0] 032 034 036 038 040 042 044 046
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [0] 048 050 052 054 056 058 060 062
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [0] 064 066 068 070 072 074 076 078
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [0] 080 082 084 086 088 090 092 094
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [0] 096 098 100 102 104 106 108 110
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [1] 001 003 005 007 009 011 013 015
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [1] 017 019 021 023 025 027 029 031
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [1] 033 035 037 039 041 043 045 047
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [1] 049 051 053 055 057 059 061 063
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [1] 065 067 069 071 073 075 077 079
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [1] 081 083 085 087 089 091 093 095
Jan 23 16:10:19 localhost kernel: pcpu-alloc: [1] 097 099 101 103 105 107 109 111
Jan 23 16:10:19 localhost kernel: Built 2 zonelists, mobility grouping on. Total pages: 65883780
Jan 23 16:10:19 localhost kernel: Policy zone: Normal
Jan 23 16:10:19 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-ed0ebe724eacc0e94bd1c86924b8e4057fafb13f722aa9acd962a4499dd06fc0/vmlinuz-4.18.0-372.40.1.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/ed0ebe724eacc0e94bd1c86924b8e4057fafb13f722aa9acd962a4499dd06fc0/0 ip=dhcp root=UUID=b7d7393a-4ab5-4434-a099-e66267f4b07d rw rootflags=prjquota boot=UUID=6b5eaf26-520d-4e42-90f4-4869c15c705f
Jan 23 16:10:19 localhost kernel: Specific versions of hardware are certified with Red Hat Enterprise Linux 8. Please see the list of hardware certified with Red Hat Enterprise Linux 8 at https://catalog.redhat.com.
Jan 23 16:10:19 localhost kernel: Memory: 1454428K/267723028K available (12293K kernel code, 5866K rwdata, 8292K rodata, 2540K init, 14320K bss, 4549784K reserved, 0K cma-reserved)
Jan 23 16:10:19 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=112, Nodes=2
Jan 23 16:10:19 localhost kernel: ftrace: allocating 40025 entries in 157 pages
Jan 23 16:10:19 localhost kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 16:10:19 localhost kernel: rcu: Hierarchical RCU implementation.
Jan 23 16:10:19 localhost kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=112.
Jan 23 16:10:19 localhost kernel: Rude variant of Tasks RCU enabled.
Jan 23 16:10:19 localhost kernel: Tracing variant of Tasks RCU enabled.
Jan 23 16:10:19 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 16:10:19 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=112
Jan 23 16:10:19 localhost kernel: NR_IRQS: 524544, nr_irqs: 2952, preallocated irqs: 16
Jan 23 16:10:19 localhost kernel: random: crng done (trusting CPU's manufacturer)
Jan 23 16:10:19 localhost kernel: Console: colour dummy device 80x25
Jan 23 16:10:19 localhost kernel: printk: console [tty0] enabled
Jan 23 16:10:19 localhost kernel: mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl
Jan 23 16:10:19 localhost kernel: ACPI: Core revision 20210604
Jan 23 16:10:19 localhost kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Jan 23 16:10:19 localhost kernel: hpet clockevent registered
Jan 23 16:10:19 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 16:10:19 localhost kernel: DMAR: Host address width 46
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000d0ffc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar0: reg_base_addr d0ffc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000dbbfc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar1: reg_base_addr dbbfc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000e67fc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar2: reg_base_addr e67fc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000f13fc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar3: reg_base_addr f13fc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000fb7fc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar4: reg_base_addr fb7fc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000a63fc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar5: reg_base_addr a63fc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000b0ffc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar6: reg_base_addr b0ffc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000bbbfc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar7: reg_base_addr bbbfc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x000000c5ffc000 flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: dmar8: reg_base_addr c5ffc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: DRHD base: 0x0000009b7fc000 flags: 0x1
Jan 23 16:10:19 localhost kernel: DMAR: dmar9: reg_base_addr 9b7fc000 ver 4:0 cap 8ed008c40780466 ecap 60000f050df
Jan 23 16:10:19 localhost kernel: DMAR: RMRR base: 0x00000069422000 end: 0x00000069424fff
Jan 23 16:10:19 localhost kernel: DMAR: ATSR flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR: ATSR flags: 0x0
Jan 23 16:10:19 localhost kernel: DMAR-IR: IOAPIC id 8 under DRHD base 0x9b7fc000 IOMMU 9
Jan 23 16:10:19 localhost kernel: DMAR-IR: HPET id 0 under DRHD base 0x9b7fc000
Jan 23 16:10:19 localhost kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 23 16:10:19 localhost kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 23 16:10:19 localhost kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 16:10:19 localhost kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1fb633008a4, max_idle_ns: 440795292230 ns
Jan 23 16:10:19 localhost kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4400.00 BogoMIPS (lpj=2200000)
Jan 23 16:10:19 localhost kernel: pid_max: default: 114688 minimum: 896
Jan 23 16:10:19 localhost kernel: efi: memattr: Entry attributes invalid: RO and XP bits both cleared
Jan 23 16:10:19 localhost kernel: efi: memattr: ! 0x000001000000-0x000004d91fff [Runtime Code |RUN| | | | | | | | | | | | ]
Jan 23 16:10:19 localhost kernel: LSM: Security Framework initializing
Jan 23 16:10:19 localhost kernel: Yama: becoming mindful.
Jan 23 16:10:19 localhost kernel: SELinux: Initializing.
Jan 23 16:10:19 localhost kernel: LSM support for eBPF active
Jan 23 16:10:19 localhost kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, vmalloc)
Jan 23 16:10:19 localhost kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, vmalloc)
Jan 23 16:10:19 localhost kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc)
Jan 23 16:10:19 localhost kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc)
Jan 23 16:10:19 localhost kernel: x86/cpu: SGX disabled by BIOS.
Jan 23 16:10:19 localhost kernel: x86/tme: not enabled by BIOS
Jan 23 16:10:19 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 16:10:19 localhost kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 23 16:10:19 localhost kernel: process: using mwait in idle threads
Jan 23 16:10:19 localhost kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 16:10:19 localhost kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 16:10:19 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 16:10:19 localhost kernel: Spectre V2 : Mitigation: Enhanced IBRS
Jan 23 16:10:19 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 23 16:10:19 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 16:10:19 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 16:10:19 localhost kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 23 16:10:19 localhost kernel: Freeing SMP alternatives memory: 36K
Jan 23 16:10:19 localhost kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1303
Jan 23 16:10:19 localhost kernel: TSC deadline timer enabled
Jan 23 16:10:19 localhost kernel: smpboot: CPU0: Intel(R) Xeon(R) Gold 6330N CPU @ 2.20GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Jan 23 16:10:19 localhost kernel: Performance Events: PEBS fmt4+-baseline, AnyThread deprecated, Icelake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 23 16:10:19 localhost kernel: ... version: 5
Jan 23 16:10:19 localhost kernel: ... bit width: 48
Jan 23 16:10:19 localhost kernel: ... generic registers: 8
Jan 23 16:10:19 localhost kernel: ... value mask: 0000ffffffffffff
Jan 23 16:10:19 localhost kernel: ... max period: 00007fffffffffff
Jan 23 16:10:19 localhost kernel: ... fixed-purpose events: 4
Jan 23 16:10:19 localhost kernel: ... event mask: 0001000f000000ff
Jan 23 16:10:19 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 23 16:10:19 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 23 16:10:19 localhost kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 23 16:10:19 localhost kernel: x86: Booting SMP configuration:
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #1
Jan 23 16:10:19 localhost kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 23 16:10:19 localhost kernel:
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #2
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #3
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #4
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #5
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #6
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #7
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #8
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #9
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #10
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #11
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #12
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #13
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #14
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #15
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #16
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #17
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #18
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #19
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #20
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #21
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #22
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #23
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #24
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #25
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #26
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #27
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #28
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #29
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #30
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #31
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #32
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #33
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #34
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #35
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #36
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #37
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #38
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #39
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #40
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #41
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #42
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #43
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #44
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #45
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #46
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #47
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #48
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #49
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #50
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #51
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #52
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #53
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #54
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #55
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #56
Jan 23 16:10:19 localhost kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 23 16:10:19 localhost kernel:
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #57
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #58
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #59
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #60
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #61
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #62
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #63
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #64
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #65
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #66
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #67
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #68
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #69
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #70
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #71
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #72
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #73
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #74
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #75
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #76
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #77
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #78
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #79
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #80
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #81
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #82
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #83
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #84
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #85
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #86
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #87
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #88
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #89
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #90
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #91
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #92
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #93
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #94
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #95
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #96
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #97
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #98
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #99
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #100
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #101
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #102
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #103
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #104
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #105
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #106
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #107
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #108
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #109
Jan 23 16:10:19 localhost kernel: .... node #0, CPUs: #110
Jan 23 16:10:19 localhost kernel: .... node #1, CPUs: #111
Jan 23 16:10:19 localhost kernel: smp: Brought up 2 nodes, 112 CPUs
Jan 23 16:10:19 localhost kernel: smpboot: Max logical packages: 2
Jan 23 16:10:19 localhost kernel: smpboot: Total of 112 processors activated (494164.83 BogoMIPS)
Jan 23 16:10:19 localhost kernel: node 0 deferred pages initialised in 35ms
Jan 23 16:10:19 localhost kernel: node 1 deferred pages initialised in 37ms
Jan 23 16:10:19 localhost kernel: devtmpfs: initialized
Jan 23 16:10:19 localhost kernel: x86/mm: Memory block size: 2048MB
Jan 23 16:10:19 localhost kernel: ACPI: PM: Registering ACPI NVS region [mem 0x4afd2000-0x4bfd1fff] (16777216 bytes)
Jan 23 16:10:19 localhost kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6e3ff000-0x6f3fefff] (16777216 bytes)
Jan 23 16:10:19 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 16:10:19 localhost kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, vmalloc)
Jan 23 16:10:19 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 16:10:19 localhost kernel: NET: Registered protocol family 16
Jan 23 16:10:19 localhost kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
Jan 23 16:10:19 localhost kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 16:10:19 localhost kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 16:10:19 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 23 16:10:19 localhost kernel: audit: type=2000 audit(1674490214.348:1): state=initialized audit_enabled=0 res=1
Jan 23 16:10:19 localhost kernel: cpuidle: using governor menu
Jan 23 16:10:19 localhost kernel: ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
Jan 23 16:10:19 localhost kernel: ACPI: bus type PCI registered
Jan 23 16:10:19 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 16:10:19 localhost kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0x80000000-0x8fffffff] (base 0x80000000)
Jan 23 16:10:19 localhost kernel: PCI: MMCONFIG at [mem 0x80000000-0x8fffffff] reserved in E820
Jan 23 16:10:19 localhost kernel: PCI: Using configuration type 1 for base access
Jan 23 16:10:19 localhost kernel: PCI: Dell System detected, enabling pci=bfsort.
Jan 23 16:10:19 localhost kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 23 16:10:19 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 16:10:19 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 16:10:19 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 16:10:19 localhost kernel: fbcon: Taking over console
Jan 23 16:10:19 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 23 16:10:19 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 23 16:10:19 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 23 16:10:19 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 16:10:19 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jan 23 16:10:19 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jan 23 16:10:19 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jan 23 16:10:19 localhost kernel: ACPI: 8 ACPI AML tables successfully acquired and loaded
Jan 23 16:10:19 localhost kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jan 23 16:10:19 localhost kernel: ACPI: Dynamic OEM Table Load:
Jan 23 16:10:19 localhost kernel: ACPI: Interpreter enabled
Jan 23 16:10:19 localhost kernel: ACPI: PM: (supports S0 S5)
Jan 23 16:10:19 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 16:10:19 localhost kernel: HEST: Table parsing has been initialized.
Jan 23 16:10:19 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 16:10:19 localhost kernel: ACPI: Enabled 4 GPEs in block 00 to 7F
Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC00] (domain 0000 [bus 00-15])
Jan 23 16:10:19 localhost kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 23 16:10:19 localhost kernel: acpi PNP0A08:00: _OSC: platform does not support [SHPCHotplug AER LTR DPC]
Jan 23 16:10:19 localhost kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jan 23 16:10:19 localhost kernel: acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration
Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:00
Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: root bus resource [io 0x1000-0x4fff window]
Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000c8000-0x000cffff window]
Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xfe010000-0xfe010fff window]
Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0x9b7fffff window]
Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x200000000000-0x203fffffffff window]
Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-15]
Jan 23 16:10:19 localhost kernel: pci 0000:00:00.0: [8086:09a2] type 00 class 0x088000
Jan 23 16:10:19 localhost kernel: pci 0000:00:00.1: [8086:09a4] type 00 class 0x088000
Jan 23 16:10:19 localhost kernel: pci 0000:00:00.2: [8086:09a3] type 00 class 0x088000
Jan 23 16:10:19 localhost kernel: pci 0000:00:00.4: [8086:0998] type 00 class 0x060000
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.0: [8086:09a6] type 00 class 0x088000
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0x92dfc000-0x92dfdfff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.1: [8086:09a7] type 00 class 0x088000
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.1: reg 0x10: [mem 0x92d00000-0x92d7ffff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.1: reg 0x14: [mem 0x92c80000-0x92cfffff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.4: [8086:3456] type 00 class 0x130000
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.4: reg 0x10: [mem 0x92a00000-0x92afffff 64bit]
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.4: reg 0x18: [mem 0x92df0000-0x92df3fff 64bit]
Jan 23 16:10:19 localhost kernel: pci 0000:00:02.4: reg 0x20: [mem 0x92dc0000-0x92ddffff 64bit]
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.0: [8086:a1ec] type 00 class 0xff0000
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.0: device has non-compliant BARs; disabling IO/MEM decoding
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.5: [8086:a1d2] type 00 class 0x010601
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.5: reg 0x10: [mem 0x92dfa000-0x92dfbfff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.5: reg 0x14: [mem 0x92e05000-0x92e050ff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.5: reg 0x18: [io 0x2068-0x206f]
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.5: reg 0x1c: [io 0x2074-0x2077]
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.5: reg 0x20: [io 0x2040-0x205f]
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.5: reg 0x24: [mem 0x92b80000-0x92bfffff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:11.5: PME# supported from D3hot
Jan 23 16:10:19 localhost kernel: pci 0000:00:14.0: [8086:a1af] type 00 class 0x0c0330
Jan 23 16:10:19 localhost kernel: pci 0000:00:14.0: reg 0x10: [mem 0x92de0000-0x92deffff 64bit]
Jan 23 16:10:19 localhost kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 23 16:10:19 localhost kernel: pci 0000:00:14.2: [8086:a1b1] type 00 class 0x118000
Jan 23 16:10:19 localhost kernel: pci 0000:00:14.2: reg 0x10: [mem 0x92e02000-0x92e02fff 64bit]
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.0: [8086:a1ba] type 00 class 0x078000
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.0: reg 0x10: [mem 0x92e01000-0x92e01fff 64bit]
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.1: [8086:a1bb] type 00 class 0x078000
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.1: reg 0x10: [mem 0x92e00000-0x92e00fff 64bit]
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.4: [8086:a1be] type 00 class 0x078000
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.4: reg 0x10: [mem 0x92dff000-0x92dfffff 64bit]
Jan 23 16:10:19 localhost kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 23 16:10:19 localhost kernel: pci 0000:00:17.0: [8086:a182] type 00 class 0x010601
Jan 23 16:10:19 localhost kernel: pci 0000:00:17.0: reg 0x10: [mem 0x92df8000-0x92df9fff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:17.0: reg 0x14: [mem 0x92e04000-0x92e040ff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:17.0: reg 0x18: [io 0x2060-0x2067]
Jan 23 16:10:19 localhost kernel: pci 0000:00:17.0: reg 0x1c: [io 0x2070-0x2073]
Jan 23 16:10:19 localhost kernel: pci 0000:00:17.0: reg 0x20: [io 0x2020-0x203f]
Jan 23 16:10:19 localhost kernel: pci 0000:00:17.0: reg 0x24: [mem 0x92b00000-0x92b7ffff]
Jan 23 16:10:19 localhost kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.0: [8086:a190] type 01 class 0x060400
Jan 23 16:10:19
localhost kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.4: [8086:a194] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.5: [8086:a195] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:00:1f.0: [8086:a1cb] type 00 class 0x060100 Jan 23 16:10:19 localhost kernel: pci 0000:00:1f.2: [8086:a1a1] type 00 class 0x058000 Jan 23 16:10:19 localhost kernel: pci 0000:00:1f.2: reg 0x10: [mem 0x92df4000-0x92df7fff] Jan 23 16:10:19 localhost kernel: pci 0000:00:1f.4: [8086:a1a3] type 00 class 0x0c0500 Jan 23 16:10:19 localhost kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x92dfe000-0x92dfe0ff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:00:1f.4: reg 0x20: [io 0x2000-0x201f] Jan 23 16:10:19 localhost kernel: pci 0000:00:1f.5: [8086:a1a4] type 00 class 0x0c8000 Jan 23 16:10:19 localhost kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.0: PCI bridge to [bus 01] Jan 23 16:10:19 localhost kernel: pci 0000:02:00.0: [1556:be00] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.4: PCI bridge to [bus 02-03] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.4: bridge window [mem 0x92000000-0x928fffff] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.4: bridge window [mem 0x91000000-0x91ffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:03: extended config space not accessible Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: [102b:0536] type 00 class 0x030000 Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: reg 0x10: [mem 0x91000000-0x91ffffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: reg 0x14: [mem 0x92808000-0x9280bfff] Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: reg 0x18: [mem 0x92000000-0x927fffff] Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: BAR 0: assigned to efifb Jan 23 16:10:19 localhost kernel: pci 0000:02:00.0: PCI bridge to [bus 03] Jan 23 16:10:19 localhost kernel: pci 0000:02:00.0: bridge window [mem 0x92000000-0x928fffff] Jan 23 16:10:19 localhost kernel: pci 0000:02:00.0: bridge window [mem 0x91000000-0x91ffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: [14e4:165f] type 00 class 0x020000 Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: reg 0x10: [mem 0x92930000-0x9293ffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: reg 0x18: [mem 0x92940000-0x9294ffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: reg 0x20: [mem 0x92950000-0x9295ffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: 4.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x1 link at 0000:00:1c.5 (capable of 8.000 Gb/s with 5.0 GT/s PCIe x2 link) Jan 23 16:10:19 localhost kernel: pci 0000:04:00.1: [14e4:165f] type 00 class 0x020000 Jan 23 16:10:19 localhost kernel: pci 0000:04:00.1: reg 0x10: [mem 0x92900000-0x9290ffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.1: reg 0x18: [mem 0x92910000-0x9291ffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.1: reg 0x20: [mem 0x92920000-0x9292ffff 64bit pref] Jan 23 16:10:19 localhost kernel: 
pci 0000:04:00.1: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.1: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.5: PCI bridge to [bus 04] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.5: bridge window [mem 0x92900000-0x929fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: on NUMA node 0 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC01] (domain 0000 [bus 16-2f]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:01: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:01: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:01: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:16 Jan 23 16:10:19 localhost kernel: pci_bus 0000:16: root bus resource [io 0x5000-0x6fff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:16: root bus resource [mem 0x9b800000-0xa63fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:16: root bus resource [mem 0x204000000000-0x207fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:16: root bus resource [bus 16-2f] Jan 23 16:10:19 localhost kernel: pci 0000:16:00.0: [8086:09a2] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:16:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:16:00.2: [8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:16:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci_bus 0000:16: on NUMA node 0 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC02] (domain 0000 [bus 30-49]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:02: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:02: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:02: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:30 Jan 23 16:10:19 localhost kernel: pci_bus 0000:30: root bus resource [io 0x7000-0x8fff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:30: root bus resource [mem 0xa6400000-0xb0ffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:30: root bus resource [mem 0x208000000000-0x20bfffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:30: root bus resource [bus 30-49] Jan 23 16:10:19 localhost kernel: pci 0000:30:00.0: [8086:09a2] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:30:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:30:00.2: [8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:30:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: [8086:347c] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: reg 0x10: [mem 0x20bffff00000-0x20bffff1ffff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:31:00.0: [8086:159b] type 00 class 0x020000 Jan 23 16:10:19 localhost kernel: 
pci 0000:31:00.0: reg 0x10: [mem 0xaa000000-0xabffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:31:00.0: reg 0x1c: [mem 0xac010000-0xac01ffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:31:00.0: reg 0x30: [mem 0xfff00000-0xffffffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:31:00.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:31:00.1: [8086:159b] type 00 class 0x020000 Jan 23 16:10:19 localhost kernel: pci 0000:31:00.1: reg 0x10: [mem 0xa8000000-0xa9ffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:31:00.1: reg 0x1c: [mem 0xac000000-0xac00ffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:31:00.1: reg 0x30: [mem 0xfff00000-0xffffffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:31:00.1: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: PCI bridge to [bus 31] Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: bridge window [mem 0xa8000000-0xac0fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:30: on NUMA node 0 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC04] (domain 0000 [bus 4a-63]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:04: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:04: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:04: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:4a Jan 23 16:10:19 localhost kernel: pci_bus 0000:4a: root bus resource [io 0x9000-0x9fff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:4a: root bus resource [mem 0xb1000000-0xbbbfffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:4a: root bus resource [mem 0x20c000000000-0x20ffffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:4a: root bus resource [bus 4a-63] Jan 23 16:10:19 localhost kernel: pci 0000:4a:00.0: [8086:09a2] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:4a:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:4a:00.2: [8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:4a:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci_bus 0000:4a: on NUMA node 0 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC05] (domain 0000 [bus 64-7d]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:05: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:05: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:05: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:64 Jan 23 16:10:19 localhost kernel: pci_bus 0000:64: root bus resource [io 0xa000-0xafff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:64: root bus resource [mem 0xbbc00000-0xc5ffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:64: root bus resource [mem 0x210000000000-0x213fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:64: root bus resource [bus 64-7d] Jan 23 16:10:19 localhost kernel: pci 0000:64:00.0: [8086:09a2] type 00 
class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:64:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:64:00.2: [8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:64:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: [8086:347a] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: reg 0x10: [mem 0x213ffff40000-0x213ffff5ffff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: enabling Extended Tags Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: [8086:347b] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: reg 0x10: [mem 0x213ffff20000-0x213ffff3ffff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: enabling Extended Tags Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: [8086:347c] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: reg 0x10: [mem 0x213ffff00000-0x213ffff1ffff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: PCI bridge to [bus 65] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: bridge window [mem 0xbc000000-0xbc3fffff] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: PCI bridge to [bus 66] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: bridge window [mem 0xbbc00000-0xbbffffff] Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: [1000:0014] type 00 class 0x010400 Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: reg 0x10: [mem 0xbc400000-0xbc4fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: reg 0x18: [mem 0xbc500000-0xbc5fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: reg 0x20: [mem 0xbc600000-0xbc6fffff] Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: reg 0x24: [io 0xa000-0xa0ff] Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: reg 0x30: [mem 0xfff00000-0xffffffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: supports D1 D2 Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: PCI bridge to [bus 67] Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: bridge window [io 0xa000-0xafff] Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: bridge window [mem 0xbc600000-0xbc6fffff] Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: bridge window [mem 0xbc400000-0xbc5fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:64: on NUMA node 0 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [UC06] (domain 0000 [bus 7e]) Jan 23 16:10:19 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:00: _OSC: platform does not support [SHPCHotplug AER LTR DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:00: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:7e Jan 23 16:10:19 localhost kernel: pci_bus 0000:7e: root bus resource [bus 7e] Jan 23 16:10:19 localhost kernel: pci 0000:7e:00.0: [8086:3450] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:00.1: [8086:3451] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 
0000:7e:00.2: [8086:3452] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:00.3: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:00.5: [8086:3455] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:02.0: [8086:3440] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:02.1: [8086:3441] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:02.2: [8086:3442] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:03.0: [8086:3440] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:03.1: [8086:3441] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:03.2: [8086:3442] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:04.0: [8086:3440] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:04.1: [8086:3441] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:04.2: [8086:3442] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:04.3: [8086:3443] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:05.0: [8086:3445] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:05.1: [8086:3446] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:05.2: [8086:3447] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:06.0: [8086:3445] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:06.1: [8086:3446] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:06.2: [8086:3447] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:07.0: [8086:3445] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:07.1: [8086:3446] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:07.2: [8086:3447] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:0b.0: [8086:3448] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:0b.1: [8086:3448] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:0b.2: [8086:344b] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7e:0c.0: [8086:344a] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:7e:0d.0: [8086:344a] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:7e:0e.0: [8086:344a] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:7e:0f.0: [8086:344a] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:7e:1a.0: [8086:2880] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:7e:1b.0: [8086:2880] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:7e:1c.0: [8086:2880] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:7e:1d.0: [8086:2880] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci_bus 0000:7e: on NUMA node 0 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [UC07] (domain 0000 [bus 7f]) Jan 23 16:10:19 localhost kernel: acpi PNP0A03:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:01: _OSC: platform does not support [SHPCHotplug AER LTR DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:01: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:01: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:7f Jan 23 16:10:19 localhost kernel: pci_bus 0000:7f: 
root bus resource [bus 7f] Jan 23 16:10:19 localhost kernel: pci 0000:7f:00.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:00.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:00.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:00.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:00.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:00.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:00.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:00.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:01.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:01.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:01.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:01.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:01.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:01.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:01.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:01.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:02.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:02.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:02.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:02.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:02.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:02.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:02.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:02.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:03.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:03.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:03.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:03.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:03.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:03.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:03.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:03.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:04.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:04.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:04.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:04.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:04.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:04.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:04.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:04.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 
0000:7f:0a.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0a.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0a.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0a.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0a.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0a.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0a.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0a.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0b.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0b.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0b.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0b.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0b.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0b.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0b.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0b.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0c.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0c.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0c.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0c.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0c.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0c.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0c.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0c.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0d.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0d.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0d.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0d.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0d.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0d.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0d.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0d.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0e.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0e.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0e.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0e.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0e.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0e.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0e.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:0e.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1d.0: [8086:344f] type 00 class 0x088000 Jan 23 16:10:19 localhost 
kernel: pci 0000:7f:1d.1: [8086:3457] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1e.0: [8086:3458] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1e.1: [8086:3459] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1e.2: [8086:345a] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1e.3: [8086:345b] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1e.4: [8086:345c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1e.5: [8086:345d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1e.6: [8086:345e] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:7f:1e.7: [8086:345f] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci_bus 0000:7f: on NUMA node 0 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC06] (domain 0000 [bus 80-96]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:06: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:06: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:06: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:80 Jan 23 16:10:19 localhost kernel: pci_bus 0000:80: root bus resource [io 0xb000-0xbfff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:80: root bus resource [mem 0xc6800000-0xd0ffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:80: root bus resource [mem 0x214000000000-0x217fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:80: root bus resource [bus 80-96] Jan 23 16:10:19 localhost kernel: pci 0000:80:00.0: [8086:09a2] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:80:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:80:00.2: [8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:80:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci 0000:80:02.0: [8086:09a6] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:80:02.0: reg 0x10: [mem 0xc6aa4000-0xc6aa5fff] Jan 23 16:10:19 localhost kernel: pci 0000:80:02.1: [8086:09a7] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:80:02.1: reg 0x10: [mem 0xc6a00000-0xc6a7ffff] Jan 23 16:10:19 localhost kernel: pci 0000:80:02.1: reg 0x14: [mem 0xc6980000-0xc69fffff] Jan 23 16:10:19 localhost kernel: pci 0000:80:02.4: [8086:3456] type 00 class 0x130000 Jan 23 16:10:19 localhost kernel: pci 0000:80:02.4: reg 0x10: [mem 0xc6800000-0xc68fffff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:80:02.4: reg 0x18: [mem 0xc6aa0000-0xc6aa3fff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:80:02.4: reg 0x20: [mem 0xc6a80000-0xc6a9ffff 64bit] Jan 23 16:10:19 localhost kernel: pci_bus 0000:80: on NUMA node 1 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC07] (domain 0000 [bus 97-af]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:07: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:07: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:07: FADT indicates 
ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:97 Jan 23 16:10:19 localhost kernel: pci_bus 0000:97: root bus resource [io 0xc000-0xcfff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:97: root bus resource [mem 0xd1000000-0xdbbfffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:97: root bus resource [mem 0x218000000000-0x21bfffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:97: root bus resource [bus 97-af] Jan 23 16:10:19 localhost kernel: pci 0000:97:00.0: [8086:09a2] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:97:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:97:00.2: [8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:97:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci_bus 0000:97: on NUMA node 1 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC08] (domain 0000 [bus b0-c8]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:08: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:08: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:08: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:08: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:b0 Jan 23 16:10:19 localhost kernel: pci_bus 0000:b0: root bus resource [io 0xd000-0xdfff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:b0: root bus resource [mem 0xdbc00000-0xe67fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:b0: root bus resource [mem 0x21c000000000-0x21ffffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:b0: root bus resource [bus b0-c8] Jan 23 16:10:19 localhost kernel: pci 0000:b0:00.0: [8086:09a2] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:b0:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:b0:00.2: [8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:b0:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci_bus 0000:b0: on NUMA node 1 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC10] (domain 0000 [bus c9-e1]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:0a: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:0a: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:0a: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:0a: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:c9 Jan 23 16:10:19 localhost kernel: pci_bus 0000:c9: root bus resource [io 0xe000-0xefff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:c9: root bus resource [mem 0xe6800000-0xf13fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:c9: root bus resource [mem 0x220000000000-0x223fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:c9: root bus resource [bus c9-e1] Jan 23 16:10:19 localhost kernel: pci 0000:c9:00.0: [8086:09a2] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:c9:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:c9:00.2: 
[8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:c9:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: [8086:347a] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: reg 0x10: [mem 0x223ffff00000-0x223ffff1ffff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.0: [15b3:101d] type 00 class 0x020000 Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.0: reg 0x10: [mem 0xea000000-0xebffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.0: reg 0x30: [mem 0xfff00000-0xffffffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.0: PME# supported from D3cold Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.1: [15b3:101d] type 00 class 0x020000 Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.1: reg 0x10: [mem 0xe8000000-0xe9ffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.1: reg 0x30: [mem 0xfff00000-0xffffffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.1: PME# supported from D3cold Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: PCI bridge to [bus ca] Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: bridge window [mem 0xe8000000-0xebffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:c9: on NUMA node 1 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [PC11] (domain 0000 [bus e2-fa]) Jan 23 16:10:19 localhost kernel: acpi PNP0A08:0b: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:0b: _OSC: platform does not support [SHPCHotplug AER DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:0b: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 23 16:10:19 localhost kernel: acpi PNP0A08:0b: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:e2 Jan 23 16:10:19 localhost kernel: pci_bus 0000:e2: root bus resource [io 0xf000-0xffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e2: root bus resource [mem 0xf1400000-0xfb7fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e2: root bus resource [mem 0x224000000000-0x227fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e2: root bus resource [bus e2-fa] Jan 23 16:10:19 localhost kernel: pci 0000:e2:00.0: [8086:09a2] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:e2:00.1: [8086:09a4] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:e2:00.2: [8086:09a3] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:e2:00.4: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: [8086:347a] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: reg 0x10: [mem 0x227ffff20000-0x227ffff3ffff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: enabling Extended Tags Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: [8086:347b] type 01 class 0x060400 Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: reg 0x10: [mem 0x227ffff00000-0x227ffff1ffff 64bit] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: enabling Extended Tags Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: PME# supported from D0 D3hot D3cold Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: PCI bridge to [bus e3] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: 
bridge window [mem 0xf1800000-0xf1bfffff] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: PCI bridge to [bus e4] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: bridge window [mem 0xf1400000-0xf17fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e2: on NUMA node 1 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [UC16] (domain 0000 [bus fe]) Jan 23 16:10:19 localhost kernel: acpi PNP0A03:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:02: _OSC: platform does not support [SHPCHotplug AER LTR DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:02: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:02: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:fe Jan 23 16:10:19 localhost kernel: pci_bus 0000:fe: root bus resource [bus fe] Jan 23 16:10:19 localhost kernel: pci 0000:fe:00.0: [8086:3450] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:00.1: [8086:3451] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:00.2: [8086:3452] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:00.3: [8086:0998] type 00 class 0x060000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:00.5: [8086:3455] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:02.0: [8086:3440] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:02.1: [8086:3441] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:02.2: [8086:3442] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:03.0: [8086:3440] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:03.1: [8086:3441] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:03.2: [8086:3442] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:04.0: [8086:3440] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:04.1: [8086:3441] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:04.2: [8086:3442] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:04.3: [8086:3443] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:05.0: [8086:3445] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:05.1: [8086:3446] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:05.2: [8086:3447] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:06.0: [8086:3445] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:06.1: [8086:3446] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:06.2: [8086:3447] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:07.0: [8086:3445] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:07.1: [8086:3446] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:07.2: [8086:3447] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:0b.0: [8086:3448] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:0b.1: [8086:3448] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:0b.2: [8086:344b] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:fe:0c.0: [8086:344a] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:fe:0d.0: [8086:344a] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:fe:0e.0: [8086:344a] type 00 
class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:fe:0f.0: [8086:344a] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:fe:1a.0: [8086:2880] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:fe:1b.0: [8086:2880] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:fe:1c.0: [8086:2880] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci 0000:fe:1d.0: [8086:2880] type 00 class 0x110100 Jan 23 16:10:19 localhost kernel: pci_bus 0000:fe: on NUMA node 1 Jan 23 16:10:19 localhost kernel: ACPI: PCI Root Bridge [UC17] (domain 0000 [bus ff]) Jan 23 16:10:19 localhost kernel: acpi PNP0A03:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:03: _OSC: platform does not support [SHPCHotplug AER LTR DPC] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:03: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 23 16:10:19 localhost kernel: acpi PNP0A03:03: FADT indicates ASPM is unsupported, using BIOS configuration Jan 23 16:10:19 localhost kernel: PCI host bridge to bus 0000:ff Jan 23 16:10:19 localhost kernel: pci_bus 0000:ff: root bus resource [bus ff] Jan 23 16:10:19 localhost kernel: pci 0000:ff:00.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:00.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:00.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:00.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:00.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:00.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:00.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:00.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:01.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:01.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:01.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:01.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:01.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:01.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:01.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:01.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:02.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:02.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:02.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:02.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:02.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:02.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:02.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:02.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:03.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:03.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:03.2: [8086:344c] type 00 class 0x088000 Jan 23 
16:10:19 localhost kernel: pci 0000:ff:03.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:03.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:03.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:03.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:03.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:04.0: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:04.1: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:04.2: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:04.3: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:04.4: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:04.5: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:04.6: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:04.7: [8086:344c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0a.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0a.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0a.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0a.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0a.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0a.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0a.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0a.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0b.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0b.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0b.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0b.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0b.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0b.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0b.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0b.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0c.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0c.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0c.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0c.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0c.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0c.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0c.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0c.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0d.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0d.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0d.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0d.3: [8086:344d] type 00 class 
0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0d.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0d.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0d.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0d.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0e.0: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0e.1: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0e.2: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0e.3: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0e.4: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0e.5: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0e.6: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:0e.7: [8086:344d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1d.0: [8086:344f] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1d.1: [8086:3457] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1e.0: [8086:3458] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1e.1: [8086:3459] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1e.2: [8086:345a] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1e.3: [8086:345b] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1e.4: [8086:345c] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1e.5: [8086:345d] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1e.6: [8086:345e] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci 0000:ff:1e.7: [8086:345f] type 00 class 0x088000 Jan 23 16:10:19 localhost kernel: pci_bus 0000:ff: on NUMA node 1 Jan 23 16:10:19 localhost kernel: iommu: Default domain type: Passthrough Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: vgaarb: setting as boot VGA device Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: vgaarb: bridge control possible Jan 23 16:10:19 localhost kernel: vgaarb: loaded Jan 23 16:10:19 localhost kernel: SCSI subsystem initialized Jan 23 16:10:19 localhost kernel: ACPI: bus type USB registered Jan 23 16:10:19 localhost kernel: usbcore: registered new interface driver usbfs Jan 23 16:10:19 localhost kernel: usbcore: registered new interface driver hub Jan 23 16:10:19 localhost kernel: usbcore: registered new device driver usb Jan 23 16:10:19 localhost kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 16:10:19 localhost kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 16:10:19 localhost kernel: PTP clock support registered Jan 23 16:10:19 localhost kernel: EDAC MC: Ver: 3.0.0 Jan 23 16:10:19 localhost kernel: Registered efivars operations Jan 23 16:10:19 localhost kernel: PCI: Using ACPI for IRQ routing Jan 23 16:10:19 localhost kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x0008f000-0x0008ffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x01000000-0x03ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43adb020-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43b8c020-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43c3d020-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43c6d020-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43ca0020-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43cd3020-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43d06020-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43d39020-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x43d48000-0x43ffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x4afd2000-0x4bffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x4c1d3000-0x4fffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x5eefe000-0x5fffffff] Jan 23 16:10:19 localhost kernel: e820: reserve RAM buffer [mem 0x6f800000-0x6fffffff] Jan 23 16:10:19 localhost kernel: NetLabel: Initializing Jan 23 16:10:19 localhost kernel: NetLabel: domain hash size = 128 Jan 23 16:10:19 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO Jan 23 16:10:19 localhost kernel: NetLabel: unlabeled traffic allowed by default Jan 23 16:10:19 localhost kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 23 16:10:19 localhost kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jan 23 16:10:19 localhost kernel: clocksource: Switched to clocksource tsc-early Jan 23 16:10:19 localhost kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 16:10:19 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 16:10:19 localhost kernel: pnp: PnP ACPI init Jan 23 16:10:19 localhost kernel: pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active) Jan 23 16:10:19 localhost kernel: system 00:01: [io 0x0500-0x05fe] has been reserved Jan 23 16:10:19 localhost kernel: system 00:01: [io 0x0400-0x041f] has been reserved Jan 23 16:10:19 localhost kernel: system 00:01: [io 0x0600-0x061f] has been reserved Jan 23 16:10:19 localhost kernel: system 00:01: [io 0x0ca0-0x0ca1] has been reserved Jan 23 16:10:19 localhost kernel: system 00:01: [io 0x0ca4-0x0ca6] has been reserved Jan 23 16:10:19 localhost kernel: system 00:01: [mem 0xff000000-0xffffffff] has been reserved Jan 23 16:10:19 localhost kernel: system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active) Jan 23 16:10:19 localhost kernel: pnp 00:02: Plug and Play ACPI device, IDs PNP0501 (active) Jan 23 16:10:19 localhost kernel: pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active) Jan 23 16:10:19 localhost kernel: system 00:04: [mem 0xfd000000-0xfdabffff] has been reserved Jan 23 16:10:19 localhost kernel: system 00:04: [mem 0xfdad0000-0xfdadffff] has been reserved Jan 23 16:10:19 localhost kernel: system 
00:04: [mem 0xfdb00000-0xfdffffff] has been reserved Jan 23 16:10:19 localhost kernel: system 00:04: [mem 0xfe000000-0xfe00ffff] has been reserved Jan 23 16:10:19 localhost kernel: system 00:04: [mem 0xfe011000-0xfe01ffff] has been reserved Jan 23 16:10:19 localhost kernel: system 00:04: [mem 0xfe036000-0xfe03bfff] has been reserved Jan 23 16:10:19 localhost kernel: system 00:04: [mem 0xfe03d000-0xfe3fffff] has been reserved Jan 23 16:10:19 localhost kernel: system 00:04: [mem 0xfe410000-0xfe7fffff] has been reserved Jan 23 16:10:19 localhost kernel: system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active) Jan 23 16:10:19 localhost kernel: system 00:05: [io 0x1000-0x10fe] has been reserved Jan 23 16:10:19 localhost kernel: system 00:05: Plug and Play ACPI device, IDs PNP0c02 (active) Jan 23 16:10:19 localhost kernel: pnp: PnP ACPI: found 6 devices Jan 23 16:10:19 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 23 16:10:19 localhost kernel: pci 0000:04:00.1: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 23 16:10:19 localhost kernel: pci 0000:31:00.0: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window Jan 23 16:10:19 localhost kernel: pci 0000:31:00.1: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.0: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.1: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.5: BAR 14: assigned [mem 0x90000000-0x900fffff] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.0: PCI bridge to [bus 01] Jan 23 16:10:19 localhost kernel: pci 0000:02:00.0: PCI bridge to [bus 03] Jan 23 16:10:19 localhost kernel: pci 0000:02:00.0: bridge window [mem 0x92000000-0x928fffff] Jan 23 16:10:19 localhost kernel: pci 0000:02:00.0: bridge window [mem 0x91000000-0x91ffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.4: PCI bridge to [bus 02-03] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.4: bridge window [mem 0x92000000-0x928fffff] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.4: bridge window [mem 0x91000000-0x91ffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.0: BAR 6: assigned [mem 0x90000000-0x9003ffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:04:00.1: BAR 6: assigned [mem 0x90040000-0x9007ffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.5: PCI bridge to [bus 04] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.5: bridge window [mem 0x90000000-0x900fffff] Jan 23 16:10:19 localhost kernel: pci 0000:00:1c.5: bridge window [mem 0x92900000-0x929fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: resource 5 [io 0x1000-0x4fff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: resource 7 [mem 0x000c8000-0x000cffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: resource 8 [mem 
0xfe010000-0xfe010fff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: resource 9 [mem 0x90000000-0x9b7fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:00: resource 10 [mem 0x200000000000-0x203fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:02: resource 1 [mem 0x92000000-0x928fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:02: resource 2 [mem 0x91000000-0x91ffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:03: resource 1 [mem 0x92000000-0x928fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:03: resource 2 [mem 0x91000000-0x91ffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:04: resource 1 [mem 0x90000000-0x900fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:04: resource 2 [mem 0x92900000-0x929fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:16: resource 4 [io 0x5000-0x6fff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:16: resource 5 [mem 0x9b800000-0xa63fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:16: resource 6 [mem 0x204000000000-0x207fffffffff window] Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: BAR 14: assigned [mem 0xa6400000-0xa65fffff] Jan 23 16:10:19 localhost kernel: pci 0000:31:00.0: BAR 6: assigned [mem 0xa6400000-0xa64fffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:31:00.1: BAR 6: assigned [mem 0xa6500000-0xa65fffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: PCI bridge to [bus 31] Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: bridge window [mem 0xa6400000-0xa65fffff] Jan 23 16:10:19 localhost kernel: pci 0000:30:04.0: bridge window [mem 0xa8000000-0xac0fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:30: resource 4 [io 0x7000-0x8fff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:30: resource 5 [mem 0xa6400000-0xb0ffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:30: resource 6 [mem 0x208000000000-0x20bfffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:31: resource 1 [mem 0xa6400000-0xa65fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:31: resource 2 [mem 0xa8000000-0xac0fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:4a: resource 4 [io 0x9000-0x9fff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:4a: resource 5 [mem 0xb1000000-0xbbbfffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:4a: resource 6 [mem 0x20c000000000-0x20ffffffffff window] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: bridge window [io 0x1000-0x0fff] to [bus 65] add_size 1000 Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 65] add_size 200000 add_align 100000 Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: bridge window [io 0x1000-0x0fff] to [bus 66] add_size 1000 Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 66] add_size 200000 add_align 100000 Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: BAR 15: assigned [mem 0x210000000000-0x2100001fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: BAR 15: assigned [mem 0x210000200000-0x2100003fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: BAR 13: no space for [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: BAR 13: failed to assign [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: BAR 13: no space for [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: BAR 13: failed to assign [io size 
0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: BAR 13: no space for [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: BAR 13: failed to assign [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: BAR 13: no space for [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: BAR 13: failed to assign [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: PCI bridge to [bus 65] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: bridge window [mem 0xbc000000-0xbc3fffff] Jan 23 16:10:19 localhost kernel: pci 0000:64:02.0: bridge window [mem 0x210000000000-0x2100001fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: PCI bridge to [bus 66] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: bridge window [mem 0xbbc00000-0xbbffffff] Jan 23 16:10:19 localhost kernel: pci 0000:64:03.0: bridge window [mem 0x210000200000-0x2100003fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: BAR 6: no space for [mem size 0x00100000 pref] Jan 23 16:10:19 localhost kernel: pci 0000:67:00.0: BAR 6: failed to assign [mem size 0x00100000 pref] Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: PCI bridge to [bus 67] Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: bridge window [io 0xa000-0xafff] Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: bridge window [mem 0xbc600000-0xbc6fffff] Jan 23 16:10:19 localhost kernel: pci 0000:64:04.0: bridge window [mem 0xbc400000-0xbc5fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:64: resource 4 [io 0xa000-0xafff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:64: resource 5 [mem 0xbbc00000-0xc5ffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:64: resource 6 [mem 0x210000000000-0x213fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:65: resource 1 [mem 0xbc000000-0xbc3fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:65: resource 2 [mem 0x210000000000-0x2100001fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:66: resource 1 [mem 0xbbc00000-0xbbffffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:66: resource 2 [mem 0x210000200000-0x2100003fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:67: resource 0 [io 0xa000-0xafff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:67: resource 1 [mem 0xbc600000-0xbc6fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:67: resource 2 [mem 0xbc400000-0xbc5fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:80: resource 4 [io 0xb000-0xbfff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:80: resource 5 [mem 0xc6800000-0xd0ffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:80: resource 6 [mem 0x214000000000-0x217fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:97: resource 4 [io 0xc000-0xcfff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:97: resource 5 [mem 0xd1000000-0xdbbfffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:97: resource 6 [mem 0x218000000000-0x21bfffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:b0: resource 4 [io 0xd000-0xdfff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:b0: resource 5 [mem 0xdbc00000-0xe67fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:b0: resource 6 [mem 0x21c000000000-0x21ffffffffff window] Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: BAR 14: assigned [mem 0xe6800000-0xe69fffff] Jan 23 16:10:19 localhost kernel: pci 0000:ca:00.0: BAR 6: assigned [mem 0xe6800000-0xe68fffff pref] Jan 23 16:10:19 localhost 
kernel: pci 0000:ca:00.1: BAR 6: assigned [mem 0xe6900000-0xe69fffff pref] Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: PCI bridge to [bus ca] Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: bridge window [mem 0xe6800000-0xe69fffff] Jan 23 16:10:19 localhost kernel: pci 0000:c9:02.0: bridge window [mem 0xe8000000-0xebffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:c9: resource 4 [io 0xe000-0xefff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:c9: resource 5 [mem 0xe6800000-0xf13fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:c9: resource 6 [mem 0x220000000000-0x223fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:ca: resource 1 [mem 0xe6800000-0xe69fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:ca: resource 2 [mem 0xe8000000-0xebffffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: bridge window [io 0x1000-0x0fff] to [bus e3] add_size 1000 Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus e3] add_size 200000 add_align 100000 Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: bridge window [io 0x1000-0x0fff] to [bus e4] add_size 1000 Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus e4] add_size 200000 add_align 100000 Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: BAR 15: assigned [mem 0x224000000000-0x2240001fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: BAR 15: assigned [mem 0x224000200000-0x2240003fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: BAR 13: assigned [io 0xf000-0xffff] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: BAR 13: no space for [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: BAR 13: failed to assign [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: BAR 13: assigned [io 0xf000-0xffff] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: BAR 13: no space for [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: BAR 13: failed to assign [io size 0x1000] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: PCI bridge to [bus e3] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: bridge window [mem 0xf1800000-0xf1bfffff] Jan 23 16:10:19 localhost kernel: pci 0000:e2:02.0: bridge window [mem 0x224000000000-0x2240001fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: PCI bridge to [bus e4] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: bridge window [io 0xf000-0xffff] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: bridge window [mem 0xf1400000-0xf17fffff] Jan 23 16:10:19 localhost kernel: pci 0000:e2:03.0: bridge window [mem 0x224000200000-0x2240003fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e2: resource 4 [io 0xf000-0xffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e2: resource 5 [mem 0xf1400000-0xfb7fffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e2: resource 6 [mem 0x224000000000-0x227fffffffff window] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e3: resource 1 [mem 0xf1800000-0xf1bfffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e3: resource 2 [mem 0x224000000000-0x2240001fffff 64bit pref] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e4: resource 0 [io 0xf000-0xffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e4: resource 1 [mem 0xf1400000-0xf17fffff] Jan 23 16:10:19 localhost kernel: pci_bus 0000:e4: resource 2 [mem 0x224000200000-0x2240003fffff 64bit pref] Jan 23 
16:10:19 localhost kernel: NET: Registered protocol family 2 Jan 23 16:10:19 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) Jan 23 16:10:19 localhost kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, vmalloc) Jan 23 16:10:19 localhost kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, vmalloc) Jan 23 16:10:19 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, vmalloc) Jan 23 16:10:19 localhost kernel: TCP: Hash tables configured (established 524288 bind 65536) Jan 23 16:10:19 localhost kernel: MPTCP token hash table entries: 65536 (order: 8, 1572864 bytes, vmalloc) Jan 23 16:10:19 localhost kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc) Jan 23 16:10:19 localhost kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc) Jan 23 16:10:19 localhost kernel: NET: Registered protocol family 1 Jan 23 16:10:19 localhost kernel: NET: Registered protocol family 44 Jan 23 16:10:19 localhost kernel: pci 0000:03:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 16:10:19 localhost kernel: PCI: CLS 0 bytes, default 64 Jan 23 16:10:19 localhost kernel: Unpacking initramfs... Jan 23 16:10:19 localhost kernel: Freeing initrd memory: 89856K Jan 23 16:10:19 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 16:10:19 localhost kernel: software IO TLB: mapped [mem 0x000000005aefe000-0x000000005eefe000] (64MB) Jan 23 16:10:19 localhost kernel: ACPI: bus type thunderbolt registered Jan 23 16:10:19 localhost kernel: Initialise system trusted keyrings Jan 23 16:10:19 localhost kernel: Key type blacklist registered Jan 23 16:10:19 localhost kernel: workingset: timestamp_bits=36 max_order=26 bucket_order=0 Jan 23 16:10:19 localhost kernel: zbud: loaded Jan 23 16:10:19 localhost kernel: pstore: using deflate compression Jan 23 16:10:19 localhost kernel: Platform Keyring initialized Jan 23 16:10:19 localhost kernel: NET: Registered protocol family 38 Jan 23 16:10:19 localhost kernel: Key type asymmetric registered Jan 23 16:10:19 localhost kernel: Asymmetric key parser 'x509' registered Jan 23 16:10:19 localhost kernel: Running certificate verification selftests Jan 23 16:10:19 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db' Jan 23 16:10:19 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247) Jan 23 16:10:19 localhost kernel: io scheduler mq-deadline registered Jan 23 16:10:19 localhost kernel: io scheduler kyber registered Jan 23 16:10:19 localhost kernel: io scheduler bfq registered Jan 23 16:10:19 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE Jan 23 16:10:19 localhost kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 130 Jan 23 16:10:19 localhost kernel: pcieport 0000:00:1c.4: PME: Signaling with IRQ 131 Jan 23 16:10:19 localhost kernel: pcieport 0000:00:1c.5: PME: Signaling with IRQ 132 Jan 23 16:10:19 localhost kernel: pcieport 0000:30:04.0: PME: Signaling with IRQ 133 Jan 23 16:10:19 localhost kernel: pcieport 0000:64:02.0: PME: Signaling with IRQ 134 Jan 23 16:10:19 localhost kernel: pcieport 0000:64:02.0: pciehp: Slot #169 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep+ (with Cmd Compl erratum) Jan 23 16:10:19 localhost kernel: pcieport 0000:64:03.0: PME: Signaling with IRQ 135 Jan 
23 16:10:19 localhost kernel: pcieport 0000:64:03.0: pciehp: Slot #168 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep+ (with Cmd Compl erratum) Jan 23 16:10:19 localhost kernel: pcieport 0000:64:04.0: PME: Signaling with IRQ 136 Jan 23 16:10:19 localhost kernel: pcieport 0000:c9:02.0: PME: Signaling with IRQ 137 Jan 23 16:10:19 localhost kernel: pcieport 0000:e2:02.0: PME: Signaling with IRQ 138 Jan 23 16:10:19 localhost kernel: pcieport 0000:e2:02.0: pciehp: Slot #167 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep+ (with Cmd Compl erratum) Jan 23 16:10:19 localhost kernel: pcieport 0000:e2:03.0: PME: Signaling with IRQ 139 Jan 23 16:10:19 localhost kernel: pcieport 0000:e2:03.0: pciehp: Slot #166 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep+ (with Cmd Compl erratum) Jan 23 16:10:19 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 Jan 23 16:10:19 localhost kernel: efifb: probing for efifb Jan 23 16:10:19 localhost kernel: efifb: framebuffer at 0x91000000, using 3072k, total 3072k Jan 23 16:10:19 localhost kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 16:10:19 localhost kernel: efifb: scrolling: redraw Jan 23 16:10:19 localhost kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 16:10:19 localhost kernel: Console: switching to colour frame buffer device 128x48 Jan 23 16:10:19 localhost kernel: fb0: EFI VGA frame buffer device Jan 23 16:10:19 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 Jan 23 16:10:19 localhost kernel: ACPI: Power Button [PWRF] Jan 23 16:10:19 localhost kernel: acpi/hmat: HMAT: Memory (0x0 length 0x80000000) Flags:0003 Processor Domain:0 Memory Domain:0 Jan 23 16:10:19 localhost kernel: acpi/hmat: HMAT: Memory (0x100000000 length 0x1f80000000) Flags:0003 Processor Domain:0 Memory Domain:0 Jan 23 16:10:19 localhost kernel: acpi/hmat: HMAT: Memory (0x2080000000 length 0x2000000000) Flags:0003 Processor Domain:1 Memory Domain:1 Jan 23 16:10:19 localhost kernel: acpi/hmat: HMAT: Locality: Flags:00 Type:Read Latency Initiator Domains:2 Target Domains:2 Base:100 Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[0-0]:7600 nsec Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[0-1]:13560 nsec Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[1-0]:13560 nsec Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[1-1]:7600 nsec Jan 23 16:10:19 localhost kernel: acpi/hmat: HMAT: Locality: Flags:00 Type:Write Latency Initiator Domains:2 Target Domains:2 Base:100 Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[0-0]:7600 nsec Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[0-1]:13560 nsec Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[1-0]:13560 nsec Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[1-1]:7600 nsec Jan 23 16:10:19 localhost kernel: acpi/hmat: HMAT: Locality: Flags:00 Type:Read Bandwidth Initiator Domains:2 Target Domains:2 Base:1 Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[0-0]:1790 MB/s Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[0-1]:1790 MB/s Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[1-0]:1790 MB/s Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[1-1]:1790 MB/s Jan 23 16:10:19 localhost kernel: acpi/hmat: HMAT: Locality: Flags:00 Type:Write 
Bandwidth Initiator Domains:2 Target Domains:2 Base:1 Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[0-0]:1910 MB/s Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[0-1]:1910 MB/s Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[1-0]:1910 MB/s Jan 23 16:10:19 localhost kernel: acpi/hmat: Initiator-Target[1-1]:1910 MB/s Jan 23 16:10:19 localhost kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 23 16:10:19 localhost kernel: pstore: Registered erst as persistent store backend Jan 23 16:10:19 localhost kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jan 23 16:10:19 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 16:10:19 localhost kernel: 00:02: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 23 16:10:19 localhost kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 16:10:19 localhost kernel: Non-volatile memory driver v1.3 Jan 23 16:10:19 localhost kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0xFC, rev-id 1) Jan 23 16:10:19 localhost kernel: rdac: device handler registered Jan 23 16:10:19 localhost kernel: hp_sw: device handler registered Jan 23 16:10:19 localhost kernel: emc: device handler registered Jan 23 16:10:19 localhost kernel: alua: device handler registered Jan 23 16:10:19 localhost kernel: libphy: Fixed MDIO Bus: probed Jan 23 16:10:19 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Jan 23 16:10:19 localhost kernel: ehci-pci: EHCI PCI platform driver Jan 23 16:10:19 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver Jan 23 16:10:19 localhost kernel: ohci-pci: OHCI PCI platform driver Jan 23 16:10:19 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver Jan 23 16:10:19 localhost kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 23 16:10:19 localhost kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 23 16:10:19 localhost kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x100 quirks 0x0000000000009810 Jan 23 16:10:19 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 4.18 Jan 23 16:10:19 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Jan 23 16:10:19 localhost kernel: usb usb1: Product: xHCI Host Controller Jan 23 16:10:19 localhost kernel: usb usb1: Manufacturer: Linux 4.18.0-372.40.1.el8_6.x86_64 xhci-hcd Jan 23 16:10:19 localhost kernel: usb usb1: SerialNumber: 0000:00:14.0 Jan 23 16:10:19 localhost kernel: hub 1-0:1.0: USB hub found Jan 23 16:10:19 localhost kernel: hub 1-0:1.0: 16 ports detected Jan 23 16:10:19 localhost kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 23 16:10:19 localhost kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 23 16:10:19 localhost kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.0 SuperSpeed Jan 23 16:10:19 localhost kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 4.18 Jan 23 16:10:19 localhost kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Jan 23 16:10:19 localhost kernel: usb usb2: Product: xHCI Host Controller Jan 23 16:10:19 localhost kernel: usb usb2: Manufacturer: Linux 4.18.0-372.40.1.el8_6.x86_64 xhci-hcd Jan 23 16:10:19 localhost kernel: usb usb2: SerialNumber: 0000:00:14.0 Jan 23 16:10:19 localhost kernel: hub 2-0:1.0: USB hub found Jan 23 16:10:19 localhost 
kernel: hub 2-0:1.0: 10 ports detected Jan 23 16:10:19 localhost kernel: usb: port power management may be unreliable Jan 23 16:10:19 localhost kernel: usbcore: registered new interface driver usbserial_generic Jan 23 16:10:19 localhost kernel: usbserial: USB Serial support registered for generic Jan 23 16:10:19 localhost kernel: i8042: PNP: No PS/2 controller found. Jan 23 16:10:19 localhost kernel: mousedev: PS/2 mouse device common for all mice Jan 23 16:10:19 localhost kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 23 16:10:19 localhost kernel: rtc_cmos 00:00: registered as rtc0 Jan 23 16:10:19 localhost kernel: rtc_cmos 00:00: alarms up to one month, y3k, 114 bytes nvram, hpet irqs Jan 23 16:10:19 localhost kernel: intel_pstate: Intel P-state driver initializing Jan 23 16:10:19 localhost kernel: EFI Variables Facility v0.08 2004-May-17 Jan 23 16:10:19 localhost kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 16:10:19 localhost kernel: usbcore: registered new interface driver usbhid Jan 23 16:10:19 localhost kernel: usbhid: USB HID core driver Jan 23 16:10:19 localhost kernel: drop_monitor: Initializing network drop monitor service Jan 23 16:10:19 localhost kernel: Initializing XFRM netlink socket Jan 23 16:10:19 localhost kernel: NET: Registered protocol family 10 Jan 23 16:10:19 localhost kernel: Segment Routing with IPv6 Jan 23 16:10:19 localhost kernel: NET: Registered protocol family 17 Jan 23 16:10:19 localhost kernel: mpls_gso: MPLS GSO support Jan 23 16:10:19 localhost kernel: microcode: sig=0x606a6, pf=0x1, revision=0xd000363 Jan 23 16:10:19 localhost kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 23 16:10:19 localhost kernel: microcode: Microcode Update Driver: v2.2. Jan 23 16:10:19 localhost kernel: resctrl: L3 allocation detected Jan 23 16:10:19 localhost kernel: resctrl: MB allocation detected Jan 23 16:10:19 localhost kernel: resctrl: L3 monitoring detected Jan 23 16:10:19 localhost kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 23 16:10:19 localhost kernel: AES CTR mode by8 optimization enabled Jan 23 16:10:19 localhost kernel: usb 1-14: New USB device found, idVendor=1604, idProduct=10c0, bcdDevice= 0.00 Jan 23 16:10:19 localhost kernel: usb 1-14: New USB device strings: Mfr=0, Product=0, SerialNumber=0 Jan 23 16:10:19 localhost kernel: hub 1-14:1.0: USB hub found Jan 23 16:10:19 localhost kernel: hub 1-14:1.0: 4 ports detected Jan 23 16:10:19 localhost kernel: sched_clock: Marking stable (4004603453, 0)->(5349845951, -1345242498) Jan 23 16:10:19 localhost kernel: registered taskstats version 1 Jan 23 16:10:19 localhost kernel: Loading compiled-in X.509 certificates Jan 23 16:10:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: 136e352dd63591869158b7e780cf7f1133502089' Jan 23 16:10:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80' Jan 23 16:10:19 localhost kernel: tsc: Refined TSC clocksource calibration: 2194.826 MHz Jan 23 16:10:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8' Jan 23 16:10:19 localhost kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1fa31b02e62, max_idle_ns: 440795242986 ns Jan 23 16:10:19 localhost kernel: zswap: loaded using pool lzo/zbud Jan 23 16:10:19 localhost kernel: clocksource: Switched to clocksource tsc Jan 23 16:10:19 localhost kernel: page_owner is disabled Jan 23 16:10:19 localhost kernel: Key type big_key registered Jan 23 16:10:19 localhost kernel: Key type trusted registered Jan 23 16:10:19 localhost kernel: Key type encrypted registered Jan 23 16:10:19 localhost kernel: integrity: Loading X.509 certificate: UEFI:db Jan 23 16:10:19 localhost kernel: integrity: Loaded X.509 cert 'Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4' Jan 23 16:10:19 localhost kernel: integrity: Loading X.509 certificate: UEFI:db Jan 23 16:10:19 localhost kernel: integrity: Loaded X.509 cert 'Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e4f9ae17c55af53' Jan 23 16:10:19 localhost kernel: integrity: Loading X.509 certificate: UEFI:db Jan 23 16:10:19 localhost kernel: integrity: Loaded X.509 cert 'VMware, Inc.: 4ad8ba0472073d28127706ddc6ccb9050441bbc7' Jan 23 16:10:19 localhost kernel: integrity: Loading X.509 certificate: UEFI:db Jan 23 16:10:19 localhost kernel: integrity: Loaded X.509 cert 'VMware, Inc.: VMware Secure Boot Signing: 04597f3e1ffb240bba0ff0f05d5eb05f3e15f6d7' Jan 23 16:10:19 localhost kernel: integrity: Loading X.509 certificate: UEFI:MokListRT (MOKvar table) Jan 23 16:10:19 localhost kernel: integrity: Loaded X.509 cert 'Red Hat Secure Boot CA 5: cc6fa5e72868ba494e939bbd680b9144769a9f8f' Jan 23 16:10:19 localhost kernel: ima: Allocated hash algorithm: sha256 Jan 23 16:10:19 localhost kernel: ima: No architecture policies found Jan 23 16:10:19 localhost kernel: evm: Initialising EVM extended attributes: Jan 23 16:10:19 localhost kernel: evm: security.selinux Jan 23 16:10:19 localhost kernel: evm: security.ima Jan 23 16:10:19 localhost kernel: evm: security.capability Jan 23 16:10:19 localhost kernel: evm: HMAC attrs: 0x1 Jan 23 16:10:19 localhost kernel: rtc_cmos 00:00: setting system clock to 2023-01-23 16:10:18 UTC (1674490218) Jan 23 16:10:19 localhost kernel: Freeing unused decrypted memory: 2036K Jan 23 16:10:19 localhost kernel: Freeing unused kernel image (initmem) memory: 2540K Jan 23 16:10:19 localhost kernel: 
Write protecting the kernel read-only data: 24576k Jan 23 16:10:19 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2012K Jan 23 16:10:19 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 1948K Jan 23 16:10:19 localhost systemd-journald[1195]: Missed 6 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:19 localhost kernel: usb 1-14.1: new high-speed USB device number 3 using xhci_hcd Jan 23 16:10:19 localhost systemd-journald[1195]: Missed 3 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:19 localhost kernel: usb 1-14.1: New USB device found, idVendor=1604, idProduct=10c0, bcdDevice= 0.00 Jan 23 16:10:19 localhost kernel: usb 1-14.1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 Jan 23 16:10:19 localhost kernel: hub 1-14.1:1.0: USB hub found Jan 23 16:10:19 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:19 localhost kernel: hub 1-14.1:1.0: 4 ports detected Jan 23 16:10:19 localhost kernel: usb 1-14.4: new high-speed USB device number 4 using xhci_hcd Jan 23 16:10:19 localhost kernel: fuse: init (API version 7.33) Jan 23 16:10:19 localhost kernel: IPMI message handler: version 39.2 Jan 23 16:10:19 localhost kernel: ipmi device interface Jan 23 16:10:19 localhost kernel: Loading iSCSI transport class v2.0-870. Jan 23 16:10:19 localhost kernel: usb 1-14.4: New USB device found, idVendor=1604, idProduct=10c0, bcdDevice= 0.00 Jan 23 16:10:19 localhost kernel: usb 1-14.4: New USB device strings: Mfr=0, Product=0, SerialNumber=0 Jan 23 16:10:19 localhost kernel: hub 1-14.4:1.0: USB hub found Jan 23 16:10:19 localhost kernel: hub 1-14.4:1.0: 4 ports detected Jan 23 16:10:19 localhost systemd-journald[1195]: Journal started -- Subject: The journal has been started -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The system journal process has started up, opened the journal -- files for writing and is now ready to process requests. Jan 23 16:10:19 localhost systemd-journald[1195]: Runtime journal (/run/log/journal/50165799c731455c915958084befad47) is 8.0M, max 4.0G, 3.9G free. -- Subject: Disk space used by the journal -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Runtime journal (/run/log/journal/50165799c731455c915958084befad47) is currently using 8.0M. -- Maximum allowed usage is set to 4.0G. -- Leaving at least 4.0G free (of currently available 125.5G of disk space). -- Enforced usage limit is thus 4.0G, of which 3.9G are still available. -- -- The limits controlling how much disk space is used by the journal may -- be configured with SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=, -- RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize= settings in -- /etc/systemd/journald.conf. See journald.conf(5) for details. 
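The journald catalog entry above names the journald.conf settings that bound journal disk usage. A minimal sketch of those keys follows; the values are illustrative only and are not taken from this host's configuration:

    # /etc/systemd/journald.conf (illustrative values only)
    [Journal]
    SystemMaxUse=4G          # cap on disk used by the persistent journal
    SystemKeepFree=4G        # always leave at least this much free on the filesystem
    SystemMaxFileSize=128M   # rotate individual journal files at this size
    RuntimeMaxUse=4G         # same caps for the volatile /run/log/journal
    RuntimeKeepFree=4G
    RuntimeMaxFileSize=128M

On a running system the new limits would be picked up with "systemctl restart systemd-journald"; see journald.conf(5), as the catalog text notes.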
Jan 23 16:10:19 localhost systemd-modules-load[1159]: Inserted module 'fuse' Jan 23 16:10:19 localhost systemd-modules-load[1159]: Module 'msr' is builtin Jan 23 16:10:19 localhost systemd-modules-load[1159]: Inserted module 'ipmi_devintf' Jan 23 16:10:19 localhost dracut-cmdline[1198]: dracut-412.86.202301061548-0 dracut-049-203.git20220511.el8_6 Jan 23 16:10:19 localhost dracut-cmdline[1198]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-ed0ebe724eacc0e94bd1c86924b8e4057fafb13f722aa9acd962a4499dd06fc0/vmlinuz-4.18.0-372.40.1.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/ed0ebe724eacc0e94bd1c86924b8e4057fafb13f722aa9acd962a4499dd06fc0/0 ip=dhcp root=UUID=b7d7393a-4ab5-4434-a099-e66267f4b07d rw rootflags=prjquota boot=UUID=6b5eaf26-520d-4e42-90f4-4869c15c705f Jan 23 16:10:19 localhost systemd[1]: Started Create Static Device Nodes in /dev. -- Subject: Unit systemd-tmpfiles-setup-dev.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-tmpfiles-setup-dev.service has finished starting up. -- -- The start-up result is done. Jan 23 16:10:19 localhost systemd-journald[1195]: Missed 8 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:19 localhost kernel: iscsi: registered transport (tcp) Jan 23 16:10:19 localhost kernel: iscsi: registered transport (qla4xxx) Jan 23 16:10:19 localhost kernel: QLogic iSCSI HBA Driver Jan 23 16:10:19 localhost kernel: libcxgbi:libcxgbi_init_module: Chelsio iSCSI driver library libcxgbi v0.9.1-ko (Apr. 2015) Jan 23 16:10:19 localhost kernel: Chelsio T4-T6 iSCSI Driver cxgb4i v0.9.5-ko (Apr. 2015) Jan 23 16:10:19 localhost kernel: iscsi: registered transport (cxgb4i) Jan 23 16:10:19 localhost kernel: cnic: QLogic cnicDriver v2.5.22 (July 20, 2015) Jan 23 16:10:19 localhost kernel: QLogic NetXtreme II iSCSI Driver bnx2i v2.7.10.1 (Jul 16, 2014) Jan 23 16:10:19 localhost kernel: iscsi: registered transport (bnx2i) Jan 23 16:10:19 localhost kernel: iscsi: registered transport (be2iscsi) Jan 23 16:10:19 localhost kernel: In beiscsi_module_init, tt=000000005e3730a2 Jan 23 16:10:19 localhost kernel: usb 1-14.1.1: new high-speed USB device number 5 using xhci_hcd Jan 23 16:10:19 localhost systemd[1]: Started dracut cmdline hook. -- Subject: Unit dracut-cmdline.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dracut-cmdline.service has finished starting up. -- -- The start-up result is done. Jan 23 16:10:19 localhost systemd[1]: Starting dracut pre-udev hook... -- Subject: Unit dracut-pre-udev.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dracut-pre-udev.service has begun starting up. Jan 23 16:10:19 localhost systemd-journald[1195]: Missed 2 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. 
Jan 23 16:10:19 localhost kernel: usb 1-14.1.1: New USB device found, idVendor=0624, idProduct=0249, bcdDevice= 0.00 Jan 23 16:10:19 localhost kernel: usb 1-14.1.1: New USB device strings: Mfr=4, Product=5, SerialNumber=6 Jan 23 16:10:19 localhost kernel: usb 1-14.1.1: Product: Keyboard/Mouse Function Jan 23 16:10:19 localhost kernel: usb 1-14.1.1: Manufacturer: Avocent Jan 23 16:10:19 localhost kernel: usb 1-14.1.1: SerialNumber: 20180726 Jan 23 16:10:19 localhost kernel: input: Avocent Keyboard/Mouse Function as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1.1/1-14.1.1:1.0/0003:0624:0249.0001/input/input1 Jan 23 16:10:19 localhost kernel: device-mapper: uevent: version 1.0.3 Jan 23 16:10:19 localhost kernel: device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised: dm-devel@redhat.com Jan 23 16:10:19 localhost systemd[1]: Started dracut pre-udev hook. -- Subject: Unit dracut-pre-udev.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dracut-pre-udev.service has finished starting up. -- -- The start-up result is done. Jan 23 16:10:19 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:19 localhost kernel: hid-generic 0003:0624:0249.0001: input,hidraw0: USB HID v1.00 Keyboard [Avocent Keyboard/Mouse Function] on usb-0000:00:14.0-14.1.1/input0 Jan 23 16:10:19 localhost kernel: input: Avocent Keyboard/Mouse Function as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1.1/1-14.1.1:1.1/0003:0624:0249.0002/input/input2 Jan 23 16:10:19 localhost kernel: hid-generic 0003:0624:0249.0002: input,hidraw1: USB HID v1.00 Mouse [Avocent Keyboard/Mouse Function] on usb-0000:00:14.0-14.1.1/input1 Jan 23 16:10:19 localhost systemd[1]: Starting udev Kernel Device Manager... -- Subject: Unit systemd-udevd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udevd.service has begun starting up. Jan 23 16:10:20 localhost systemd[1]: Started udev Kernel Device Manager. -- Subject: Unit systemd-udevd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udevd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:10:20 localhost systemd[1]: Starting dracut pre-trigger hook... -- Subject: Unit dracut-pre-trigger.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dracut-pre-trigger.service has begun starting up. Jan 23 16:10:20 localhost dracut-pre-trigger[1478]: rd.md=0: removing MD RAID activation Jan 23 16:10:20 localhost systemd[1]: Started dracut pre-trigger hook. -- Subject: Unit dracut-pre-trigger.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dracut-pre-trigger.service has finished starting up. -- -- The start-up result is done. Jan 23 16:10:20 localhost systemd[1]: Starting udev Coldplug all Devices... -- Subject: Unit systemd-udev-trigger.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udev-trigger.service has begun starting up. Jan 23 16:10:20 localhost systemd[1]: Mounting Kernel Configuration File System... 
-- Subject: Unit sys-kernel-config.mount has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sys-kernel-config.mount has begun starting up. Jan 23 16:10:20 localhost systemd[1]: Mounted Kernel Configuration File System. -- Subject: Unit sys-kernel-config.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sys-kernel-config.mount has finished starting up. -- -- The start-up result is done. Jan 23 16:10:20 localhost systemd[1]: Started udev Coldplug all Devices. -- Subject: Unit systemd-udev-trigger.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udev-trigger.service has finished starting up. -- -- The start-up result is done. Jan 23 16:10:20 localhost systemd[1]: Starting udev Wait for Complete Device Initialization... -- Subject: Unit systemd-udev-settle.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udev-settle.service has begun starting up. Jan 23 16:10:20 localhost systemd-journald[1195]: Missed 10 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:20 localhost kernel: megasas: 07.719.03.00-rh1 Jan 23 16:10:20 localhost kernel: libata version 3.00 loaded. Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: BAR:0x0 BAR's base_addr(phys):0x00000000bc400000 mapped virt_addr:0x000000001e3ca7d9 Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: FW now in Ready state Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: 63 bit DMA mask and 63 bit consistent mask Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: firmware supports msix : (128) Jan 23 16:10:20 localhost systemd-udevd[1692]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: requested/available msix 113/113 Jan 23 16:10:20 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: current msix/online cpus : (113/112) Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: RDPQ mode : (enabled) Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: Current firmware supports maximum commands: 4077 LDIO threshold: 0 Jan 23 16:10:20 localhost kernel: ice: Intel(R) Ethernet Connection E800 Series Linux Driver Jan 23 16:10:20 localhost kernel: ice: Copyright (c) 2018, Intel Corporation. 
Jan 23 16:10:20 localhost kernel: tg3 0000:04:00.0 eth0: Tigon3 [partno(BCM95720) rev 5720000] (PCI Express) MAC address b0:7b:25:de:1a:bc Jan 23 16:10:20 localhost kernel: tg3 0000:04:00.0 eth0: attached PHY is 5720C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) Jan 23 16:10:20 localhost kernel: tg3 0000:04:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] Jan 23 16:10:20 localhost kernel: tg3 0000:04:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit] Jan 23 16:10:20 localhost kernel: mlx5_core 0000:ca:00.0: firmware version: 22.31.1014 Jan 23 16:10:20 localhost kernel: mlx5_core 0000:ca:00.0: 252.048 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x16 link) Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: Performance mode :Latency (latency index = 1) Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: FW supports sync cache : Yes Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009 Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: FW provided supportMaxExtLDs: 1 max_lds: 64 Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: controller type : MR(4096MB) Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: Online Controller Reset(OCR) : Enabled Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: Secure JBOD support : No Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: NVMe passthru support : Yes Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: FW provided TM TaskAbort/Reset timeout : 6 secs/60 secs Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: JBOD sequence map support : Yes Jan 23 16:10:20 localhost kernel: mlx5_core 0000:ca:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps Jan 23 16:10:20 localhost kernel: megaraid_sas 0000:67:00.0: PCI Lane Margining support : No Jan 23 16:10:20 localhost kernel: mlx5_core 0000:ca:00.0: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048) Jan 23 16:10:20 localhost kernel: mlx5_core 0000:ca:00.0: Port module event: module 0, Cable unplugged Jan 23 16:10:20 localhost kernel: mlx5_core 0000:ca:00.0: mlx5_pcie_event:290:(pid 8): PCIe slot power capability was not advertised. Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.1: firmware version: 22.31.1014 Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: NVME page size : (4096) Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.1: 252.048 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x16 link) Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: megasas_enable_intr_fusion is called outbound_intr_mask:0x40000000 Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: INIT adapter done Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: Snap dump wait time : 15 Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: pci id : (0x1000)/(0x0014)/(0x1028)/(0x1f3b) Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: unevenspan support : yes Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: firmware crash dump : no Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: JBOD sequence map : enabled Jan 23 16:10:21 localhost kernel: megaraid_sas 0000:67:00.0: Max firmware commands: 4076 shared with nr_hw_queues = 112 Jan 23 16:10:21 localhost kernel: scsi host0: Avago SAS based MegaRAID driver Jan 23 16:10:21 localhost kernel: ice 0000:31:00.0: QinQ functionality cannot be enabled on this device. 
Update your NVM to a version that supports QinQ. Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.1: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps Jan 23 16:10:21 localhost kernel: ice 0000:31:00.0: The DDP package was successfully loaded: ICE OS Default Package version 1.3.26.0 Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.1: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048) Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.1: Port module event: module 1, Cable unplugged Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.1: mlx5_pcie_event:290:(pid 904): PCIe slot power capability was not advertised. Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Jan 23 16:10:21 localhost kernel: ice 0000:31:00.0: PTP init successful Jan 23 16:10:21 localhost kernel: ice 0000:31:00.0: DCB is enabled in the hardware, max number of TCs supported on this port are 8 Jan 23 16:10:21 localhost kernel: ice 0000:31:00.0: FW LLDP is disabled, DCBx/LLDP in SW mode. Jan 23 16:10:21 localhost kernel: ice 0000:31:00.0: Commit DCB Configuration to the hardware Jan 23 16:10:21 localhost kernel: ice 0000:31:00.0: 126.024 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x8 link) Jan 23 16:10:21 localhost kernel: ahci 0000:00:11.5: version 3.0 Jan 23 16:10:21 localhost kernel: ahci 0000:00:11.5: AHCI 0001.0301 32 slots 2 ports 6 Gbps 0x3 impl SATA mode Jan 23 16:10:21 localhost kernel: ahci 0000:00:11.5: flags: 64bit ncq sntf pm led clo only pio slum part ems deso sadm sds apst Jan 23 16:10:21 localhost kernel: scsi host1: ahci Jan 23 16:10:21 localhost kernel: scsi host2: ahci Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jan 23 16:10:21 localhost kernel: ata1: SATA max UDMA/133 abar m524288@0x92b80000 port 0x92b80100 irq 614 Jan 23 16:10:21 localhost kernel: ata2: SATA max UDMA/133 abar m524288@0x92b80000 port 0x92b80180 irq 614 Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.1: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Jan 23 16:10:21 localhost kernel: tg3 0000:04:00.1 eth3: Tigon3 [partno(BCM95720) rev 5720000] (PCI Express) MAC address b0:7b:25:de:1a:bd Jan 23 16:10:21 localhost kernel: tg3 0000:04:00.1 eth3: attached PHY is 5720C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) Jan 23 16:10:21 localhost kernel: tg3 0000:04:00.1 eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] Jan 23 16:10:21 localhost kernel: tg3 0000:04:00.1 eth3: dma_rwctrl[00000001] dma_mask[64-bit] Jan 23 16:10:21 localhost kernel: ice 0000:31:00.1: QinQ functionality cannot be enabled on this device. Update your NVM to a version that supports QinQ. Jan 23 16:10:21 localhost kernel: ice 0000:31:00.1: DDP package already present on device: ICE OS Default Package version 1.3.26.0 Jan 23 16:10:21 localhost systemd-udevd[1674]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:10:21 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. 
Jan 23 16:10:21 localhost kernel: tg3 0000:04:00.1 eno8403: renamed from eth3 Jan 23 16:10:21 localhost kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Jan 23 16:10:21 localhost kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf pm led clo only pio slum part ems deso sadm sds apst Jan 23 16:10:21 localhost systemd-udevd[1632]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:10:21 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:21 localhost kernel: scsi host3: ahci Jan 23 16:10:21 localhost systemd-udevd[1632]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:10:21 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:21 localhost kernel: scsi host4: ahci Jan 23 16:10:21 localhost systemd-udevd[1674]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:10:21 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:21 localhost kernel: tg3 0000:04:00.0 eno8303: renamed from eth0 Jan 23 16:10:21 localhost systemd-udevd[1589]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:10:21 localhost kernel: ice 0000:31:00.1: PTP init successful Jan 23 16:10:21 localhost kernel: scsi host5: ahci Jan 23 16:10:21 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:21 localhost kernel: scsi host6: ahci Jan 23 16:10:21 localhost kernel: mlx5_core 0000:ca:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jan 23 16:10:21 localhost kernel: scsi host7: ahci Jan 23 16:10:21 localhost kernel: ice 0000:31:00.1: DCB is enabled in the hardware, max number of TCs supported on this port are 8 Jan 23 16:10:21 localhost kernel: ice 0000:31:00.1: FW LLDP is disabled, DCBx/LLDP in SW mode. Jan 23 16:10:21 localhost kernel: ata1: SATA link down (SStatus 4 SControl 300) Jan 23 16:10:21 localhost kernel: ice 0000:31:00.1: Commit DCB Configuration to the hardware Jan 23 16:10:21 localhost kernel: ata2: SATA link down (SStatus 4 SControl 300) Jan 23 16:10:21 localhost kernel: scsi host8: ahci Jan 23 16:10:22 localhost kernel: ice 0000:31:00.1: 126.024 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x8 link) Jan 23 16:10:22 localhost kernel: scsi host9: ahci Jan 23 16:10:22 localhost systemd-udevd[1589]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Jan 23 16:10:22 localhost kernel: scsi host10: ahci Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:22 localhost kernel: ice 0000:31:00.0 eno12399: renamed from eth1 Jan 23 16:10:22 localhost kernel: ata3: SATA max UDMA/133 abar m524288@0x92b00000 port 0x92b00100 irq 717 Jan 23 16:10:22 localhost kernel: ata4: SATA max UDMA/133 abar m524288@0x92b00000 port 0x92b00180 irq 717 Jan 23 16:10:22 localhost kernel: ata5: SATA max UDMA/133 abar m524288@0x92b00000 port 0x92b00200 irq 717 Jan 23 16:10:22 localhost kernel: ata6: SATA max UDMA/133 abar m524288@0x92b00000 port 0x92b00280 irq 717 Jan 23 16:10:22 localhost kernel: ata7: SATA max UDMA/133 abar m524288@0x92b00000 port 0x92b00300 irq 717 Jan 23 16:10:22 localhost kernel: i40e: Intel(R) Ethernet Connection XL710 Network Driver Jan 23 16:10:22 localhost kernel: ata8: SATA max UDMA/133 abar m524288@0x92b00000 port 0x92b00380 irq 717 Jan 23 16:10:22 localhost kernel: i40e: Copyright (c) 2013 - 2019 Intel Corporation. Jan 23 16:10:22 localhost kernel: ata9: SATA max UDMA/133 abar m524288@0x92b00000 port 0x92b00400 irq 717 Jan 23 16:10:22 localhost kernel: ata10: SATA max UDMA/133 abar m524288@0x92b00000 port 0x92b00480 irq 717 Jan 23 16:10:22 localhost systemd-udevd[1674]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:22 localhost kernel: ice 0000:31:00.1 eno12409: renamed from eth3 Jan 23 16:10:22 localhost systemd-udevd[1852]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:10:22 localhost systemd-udevd[1813]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:22 localhost kernel: mlx5_core 0000:ca:00.1 ens2f1: renamed from eth0 Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:22 localhost kernel: scsi 0:2:0:0: Direct-Access DELL PERC H745 Frnt 5.16 PQ: 0 ANSI: 5 Jan 23 16:10:22 localhost systemd-udevd[1813]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. 
Jan 23 16:10:22 localhost kernel: scsi 0:2:1:0: Direct-Access DELL PERC H745 Frnt 5.16 PQ: 0 ANSI: 5 Jan 23 16:10:22 localhost systemd-udevd[1852]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:22 localhost kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 16:10:22 localhost kernel: ata7: SATA link down (SStatus 4 SControl 300) Jan 23 16:10:22 localhost kernel: ata8: SATA link down (SStatus 4 SControl 300) Jan 23 16:10:22 localhost kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 16:10:22 localhost kernel: ata9: SATA link down (SStatus 4 SControl 300) Jan 23 16:10:22 localhost systemd[1]: Created slice system-rdma\x2dload\x2dmodules.slice. -- Subject: Unit system-rdma\x2dload\x2dmodules.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit system-rdma\x2dload\x2dmodules.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 1 kernel messages -- Subject: Journal messages have been missed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Kernel messages have been lost as the journal system has been unable -- to process them quickly enough. Jan 23 16:10:22 localhost kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 16:10:22 localhost kernel: ata10: SATA link down (SStatus 4 SControl 300) Jan 23 16:10:22 localhost kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 16:10:22 localhost kernel: mlx5_core 0000:ca:00.0 ens2f0: renamed from eth2 Jan 23 16:10:22 localhost systemd[1]: Starting Load RDMA modules from /etc/rdma/modules/rdma.conf... -- Subject: Unit rdma-load-modules@rdma.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@rdma.service has begun starting up. Jan 23 16:10:22 localhost systemd-udevd[1834]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:10:22 localhost systemd-udevd[1574]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:10:22 localhost systemd-udevd[1827]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:10:22 localhost systemd[1]: Starting Load RDMA modules from /etc/rdma/modules/roce.conf... -- Subject: Unit rdma-load-modules@roce.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@roce.service has begun starting up. Jan 23 16:10:22 localhost systemd[1]: Starting Load RDMA modules from /etc/rdma/modules/infiniband.conf... -- Subject: Unit rdma-load-modules@infiniband.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@infiniband.service has begun starting up. Jan 23 16:10:22 localhost systemd[1]: Started udev Wait for Complete Device Initialization. -- Subject: Unit systemd-udev-settle.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udev-settle.service has finished starting up. -- -- The start-up result is done. 
Jan 23 16:10:22 localhost systemd-modules-load[1874]: Inserted module 'ib_iser'
Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 7 kernel messages
Jan 23 16:10:22 localhost kernel: iscsi: registered transport (iser)
Jan 23 16:10:22 localhost systemd[1]: Started Load RDMA modules from /etc/rdma/modules/roce.conf.
Jan 23 16:10:22 localhost systemd-journald[1195]: Missed 2 kernel messages
Jan 23 16:10:22 localhost kernel: Rounding down aligned max_sectors from 4294967295 to 4294967288
Jan 23 16:10:22 localhost kernel: db_root: cannot open: /etc/target
Jan 23 16:10:22 localhost systemd-modules-load[1944]: Inserted module 'ib_ipoib'
Jan 23 16:10:22 localhost systemd-modules-load[1944]: Inserted module 'ib_umad'
Jan 23 16:10:22 localhost systemd[1]: Started Load RDMA modules from /etc/rdma/modules/infiniband.conf.
Jan 23 16:10:22 localhost systemd-modules-load[1874]: Inserted module 'ib_isert'
Jan 23 16:10:22 localhost systemd-modules-load[1874]: Inserted module 'ib_srpt'
Jan 23 16:10:22 localhost systemd-modules-load[1874]: Inserted module 'rdma_ucm'
Jan 23 16:10:22 localhost systemd-modules-load[1874]: Failed to find module 'xprtrdma'
Jan 23 16:10:22 localhost systemd-modules-load[1874]: Failed to find module 'svcrdma'
Jan 23 16:10:22 localhost systemd[1]: rdma-load-modules@rdma.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 16:10:22 localhost systemd[1]: rdma-load-modules@rdma.service: Failed with result 'exit-code'.
Jan 23 16:10:22 localhost systemd[1]: Failed to start Load RDMA modules from /etc/rdma/modules/rdma.conf.
Jan 23 16:10:22 localhost systemd[1]: Reached target Network (Pre).
Jan 23 16:10:22 localhost systemd[1]: Reached target RDMA Hardware.
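The rdma-load-modules@rdma.service failure above comes only from the two "Failed to find module" lines: /etc/rdma/modules/rdma.conf lists xprtrdma and svcrdma, and this initramfs cannot resolve those names. On 4.18-era kernels both names are module aliases of rpcrdma, so the modules themselves are not missing from the installed system. A diagnostic sketch for the booted host, assuming the stock rdma-core file layout shown in the log:

    # dry run: does the alias resolve to a loadable module?
    modprobe -n -v xprtrdma
    # both NFS-over-RDMA names are aliases of rpcrdma on recent kernels
    modinfo rpcrdma | head -n 3
    # if NFS over RDMA is not needed, commenting the entries out silences the failure
    sed -i -e 's/^xprtrdma/# xprtrdma/' -e 's/^svcrdma/# svcrdma/' /etc/rdma/modules/rdma.conf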
Jan 23 16:10:22 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 16:10:23 localhost systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 16:10:23 localhost systemd[1]: Starting Open-iSCSI...
Jan 23 16:10:23 localhost multipathd[1981]: --------start up--------
Jan 23 16:10:23 localhost multipathd[1981]: read /etc/multipath.conf
Jan 23 16:10:23 localhost multipathd[1981]: /etc/multipath.conf does not exist, blacklisting all devices.
Jan 23 16:10:23 localhost multipathd[1981]: You can run "/sbin/mpathconf --enable" to create
Jan 23 16:10:23 localhost multipathd[1981]: /etc/multipath.conf. See man mpathconf(8) for more details
Jan 23 16:10:23 localhost multipathd[1981]: path checkers start up
Jan 23 16:10:23 localhost multipathd[1981]: /etc/multipath.conf does not exist, blacklisting all devices.
Jan 23 16:10:23 localhost multipathd[1981]: You can run "/sbin/mpathconf --enable" to create
Jan 23 16:10:23 localhost multipathd[1981]: /etc/multipath.conf. See man mpathconf(8) for more details
Jan 23 16:10:23 localhost iscsid[1982]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jan 23 16:10:23 localhost iscsid[1982]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Jan 23 16:10:23 localhost iscsid[1982]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jan 23 16:10:23 localhost iscsid[1982]: If using hardware iscsi like qla4xxx this message can be ignored.
Jan 23 16:10:23 localhost iscsid[1982]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jan 23 16:10:23 localhost systemd[1]: Reached target Local File Systems (Pre).
Jan 23 16:10:23 localhost systemd[1]: Reached target Local File Systems.
Jan 23 16:10:23 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 16:10:23 localhost systemd[1]: Started Open-iSCSI.
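Both warnings above concern absent configuration files rather than hard failures: multipathd blacklists every device when /etc/multipath.conf is missing, and iscsid has no initiator name to register. The log itself names the remedies; a sketch of both, assuming the standard device-mapper-multipath and iscsi-initiator-utils tools (on an RHCOS node, persistent changes under /etc would normally be delivered via Ignition or a MachineConfig rather than edited by hand):

    # generate a default /etc/multipath.conf and start multipathd against it
    mpathconf --enable --with_multipathd y
    # generate a unique initiator name in the iqn format iscsid asks for
    echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi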
Jan 23 16:10:23 localhost systemd[1]: Started Create Volatile Files and Directories.
Jan 23 16:10:23 localhost systemd[1]: Reached target System Initialization.
Jan 23 16:10:23 localhost systemd[1]: Reached target Basic System.
Jan 23 16:10:23 localhost systemd[1]: Starting dracut initqueue hook...
Jan 23 16:10:23 localhost systemd-journald[1195]: Missed 38 kernel messages
Jan 23 16:10:23 localhost kernel: scsi 0:2:0:0: Attached scsi generic sg0 type 0
Jan 23 16:10:23 localhost kernel: scsi 0:2:1:0: Attached scsi generic sg1 type 0
Jan 23 16:10:23 localhost NetworkManager[2007]: [1674490223.6362] NetworkManager (version 1.36.0-11.el8_6) is starting... (for the first time)
Jan 23 16:10:23 localhost NetworkManager[2007]: [1674490223.6362] Read config: /etc/NetworkManager/NetworkManager.conf
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6622] auth[0x556141ea42a0]: create auth-manager: D-Bus connection not available. Polkit is disabled and only root will be authorized.
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6627] manager[0x556141edb020]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 16:10:23 localhost.localdomain systemd-journald[1195]: Missed 4 kernel messages
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:0:0: [sda] 936640512 512-byte logical blocks: (480 GB/447 GiB)
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:1:0: [sdb] 936640512 512-byte logical blocks: (480 GB/447 GiB)
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:1:0: [sdb] Write Protect is off
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:1:0: [sdb] Mode Sense: 1f 00 00 08
Jan 23 16:10:23 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eno12399: link is not ready
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:1:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:1:0: [sdb] Optimal transfer size 262144 bytes
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6628] hostname: hostname: hostnamed not used as proxy creation failed with: Could not connect: No such file or directory
Jan 23 16:10:23 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:1:0: [sdb] Attached SCSI disk
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:0:0: [sda] Write Protect is off
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6629] dns-mgr[0x556141ed0120]: init: dns=default,systemd-resolved rc-manager=symlink
Jan 23 16:10:23 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:0:0: [sda] Mode Sense: 1f 00 00 08
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6629] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Jan 23 16:10:23 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6951] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.36.0-11.el8_6/libnm-device-plugin-team.so)
Jan 23 16:10:23 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:0:0: [sda] Optimal transfer size 262144 bytes
Jan 23 16:10:23 localhost.localdomain kernel: ice 0000:31:00.0 eno12399: NIC Link is up 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg Advertised: Off, Autoneg Negotiated: False, Flow Control: None
Jan 23 16:10:23 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eno12399: link is not ready
Jan 23 16:10:23 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eno12399: link becomes ready
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6951] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6951] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6951] manager: Networking is enabled by state file
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6953] ifcfg-rh: dbus: don't use D-Bus for com.redhat.ifcfgrh1 service
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6953] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.36.0-11.el8_6/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6954] settings: Loaded settings plugin: keyfile (internal)
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6960] dhcp-init: Using DHCP client 'internal'
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6960] device (lo): carrier: link connected
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6960] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6962] manager: (eno12399): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.6962] device (eno12399): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 23 16:10:23 localhost.localdomain systemd-journald[1195]: Missed 11 kernel messages
Jan 23 16:10:23 localhost.localdomain kernel: sda: sda1 sda2 sda3 sda4
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.8563] device (eno12399): carrier: link connected
Jan 23 16:10:23 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:23 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eno12409: link is not ready
Jan 23 16:10:23 localhost.localdomain kernel: sd 0:2:0:0: [sda] Attached SCSI disk
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.8570] manager: (eno12409): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 16:10:24 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:24 localhost.localdomain kernel: ice 0000:31:00.1 eno12409: NIC Link is up 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg Advertised: Off, Autoneg Negotiated: False, Flow Control: None
Jan 23 16:10:23 localhost.localdomain NetworkManager[2007]: [1674490223.8571] device (eno12409): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 23 16:10:24 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:24 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eno12409: link is not ready
Jan 23 16:10:23 localhost.localdomain systemd[1]: Found device PERC_H745_Frnt root.
Jan 23 16:10:24 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:24 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eno8303: link is not ready
Jan 23 16:10:24 localhost.localdomain NetworkManager[2007]: [1674490224.1216] device (eno12409): carrier: link connected
Jan 23 16:10:24 localhost.localdomain NetworkManager[2007]: [1674490224.1227] manager: (eno8303): new Ethernet device (/org/freedesktop/NetworkManager/Devices/4)
Jan 23 16:10:24 localhost.localdomain NetworkManager[2007]: [1674490224.1228] device (eno8303): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 23 16:10:24 localhost.localdomain systemd[1]: Found device PERC_H745_Frnt root.
Jan 23 16:10:24 localhost.localdomain systemd[1]: Reached target Initrd Root Device.
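The two "Found device PERC_H745_Frnt root." entries correspond to two systemd device units for the same RAID volume: dev-disk-by\x2duuid-b7d7393a\x2d4ab5\x2d4434\x2da099\x2de66267f4b07d.device and dev-disk-by\x2dlabel-root.device, i.e. the root disk becoming available both under its UUID and under its "root" label. A sketch for confirming the mapping from the booted system (per the XFS messages later in this log, the root filesystem lives on sda4):

    # list partitions with filesystem type, label and UUID
    lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT /dev/sda
    # show just the root partition's identifiers
    blkid /dev/sda4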
Jan 23 16:10:24 localhost.localdomain NetworkManager[2007]: [1674490224.5337] manager: (eno8403): new Ethernet device (/org/freedesktop/NetworkManager/Devices/5)
Jan 23 16:10:24 localhost.localdomain systemd-journald[1195]: Missed 5 kernel messages
Jan 23 16:10:24 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eno8303: link is not ready
Jan 23 16:10:24 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:24 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eno8403: link is not ready
Jan 23 16:10:24 localhost.localdomain NetworkManager[2007]: [1674490224.5337] device (eno8403): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 23 16:10:24 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:24 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eno8403: link is not ready
Jan 23 16:10:24 localhost.localdomain NetworkManager[2007]: [1674490224.7371] manager: (ens2f0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/6)
Jan 23 16:10:24 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): ens2f0: link is not ready
Jan 23 16:10:24 localhost.localdomain NetworkManager[2007]: [1674490224.7371] device (ens2f0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 23 16:10:25 localhost.localdomain systemd-journald[1195]: Missed 2 kernel messages
Jan 23 16:10:25 localhost.localdomain kernel: mlx5_core 0000:ca:00.0 ens2f0: Link down
Jan 23 16:10:25 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): ens2f0: link is not ready
Jan 23 16:10:25 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eno12409: link becomes ready
Jan 23 16:10:25 localhost.localdomain NetworkManager[2007]: [1674490225.4586] manager: (ens2f1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7)
Jan 23 16:10:25 localhost.localdomain NetworkManager[2007]: [1674490225.4586] device (ens2f1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 23 16:10:25 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:25 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): ens2f1: link is not ready
Jan 23 16:10:26 localhost.localdomain systemd-journald[1195]: Missed 1 kernel messages
Jan 23 16:10:26 localhost.localdomain kernel: mlx5_core 0000:ca:00.1 ens2f1: Link down
Jan 23 16:10:26 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): ens2f1: link is not ready
Jan 23 16:10:26 localhost.localdomain NetworkManager[2007]: [1674490226.3333] sleep-monitor-sd: failed to acquire D-Bus proxy: Could not connect: No such file or directory
Jan 23 16:10:26 localhost.localdomain NetworkManager[2007]: [1674490226.3334] device (eno12399): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 localhost.localdomain NetworkManager[2007]: [1674490226.3341] device (eno12409): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 localhost.localdomain NetworkManager[2007]: [1674490226.3350] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 16:10:26 localhost.localdomain NetworkManager[2007]: [1674490226.3352] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3354] device (eno12399): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3354] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3354] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3354] manager: NetworkManager state is now CONNECTING
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3354] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3357] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3358] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3359] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3365] dhcp4 (eno12399): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3367] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.3373] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.4384] dhcp4 (eno12399): state changed new lease, address=192.168.18.12
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.4385] policy: set 'Wired Connection' (eno12399) as default for IPv4 routing and DNS
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.4385] policy: set-hostname: set hostname to 'hub-master-0' (from DHCPv4)
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.5146] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.5146] device (eno12399): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.5147] device (eno12399): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.5147] device (eno12399): Activation: successful, device activated.
Jan 23 16:10:26 hub-master-0 NetworkManager[2007]: [1674490226.5148] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 16:10:27 hub-master-0 NetworkManager[2007]: [1674490227.9965] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:10:27 hub-master-0 NetworkManager[2007]: [1674490227.9966] policy: set 'Wired Connection' (eno12409) as default for IPv6 routing and DNS
Jan 23 16:10:28 hub-master-0 NetworkManager[2007]: [1674490228.4930] dhcp6 (eno12399): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:10:28 hub-master-0 NetworkManager[2007]: [1674490228.4931] policy: set 'Wired Connection' (eno12399) as default for IPv6 routing and DNS
Jan 23 16:10:28 hub-master-0 NetworkManager[2007]: [1674490228.4935] dhcp6 (eno12399): state changed new lease, address=2600:52:7:18::23
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.1178] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.1178] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.1178] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2182] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2183] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2188] manager: startup complete
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2189] quitting now that startup is complete
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2862] dhcp4 (eno12399): canceled DHCP transaction
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2862] dhcp4 (eno12399): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2862] dhcp4 (eno12399): state changed no lease
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2863] dhcp6 (eno12399): canceled DHCP transaction
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2863] dhcp6 (eno12399): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2863] dhcp6 (eno12399): state changed no lease
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2863] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 16:11:56 hub-master-0 NetworkManager[2007]: [1674490316.2883] exiting (success)
Jan 23 16:11:56 hub-master-0 systemd[1]: Started dracut initqueue hook.
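The activation sequence above shows eno12399 walking the full NetworkManager state machine (disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated) and taking 192.168.18.12 plus the hostname from DHCPv4, while eno12409 sits in ip-config until its 90-second DHCP transaction expires and fails with 'ip-config-unavailable'. NetworkManager then quits because the initrd runs it in a configure-and-quit mode ("quitting now that startup is complete"). A post-boot triage sketch, assuming nmcli is available in the real root (it is not part of this initramfs) and that the initrd-generated 'Wired Connection' profile still exists there; the 300-second timeout is a hypothetical value:

    # overall device/connection state
    nmcli device status
    # details for the interface that failed DHCP
    nmcli -f GENERAL.STATE,IP4,DHCP4 device show eno12409
    # optionally allow DHCP more than the 90 seconds seen in the log
    nmcli connection modify 'Wired Connection' ipv4.dhcp-timeout 300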
Jan 23 16:11:56 hub-master-0 systemd[1]: Reached target Remote File Systems (Pre).
Jan 23 16:11:56 hub-master-0 systemd[1]: Reached target Remote File Systems.
Jan 23 16:11:56 hub-master-0 systemd[1]: Starting dracut pre-mount hook...
Jan 23 16:11:56 hub-master-0 systemd[1]: Started dracut pre-mount hook.
Jan 23 16:11:56 hub-master-0 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b7d7393a-4ab5-4434-a099-e66267f4b07d...
Jan 23 16:11:56 hub-master-0 systemd-fsck[2080]: /usr/sbin/fsck.xfs: XFS file system.
Jan 23 16:11:56 hub-master-0 systemd[1]: Started File System Check on /dev/disk/by-uuid/b7d7393a-4ab5-4434-a099-e66267f4b07d.
Jan 23 16:11:56 hub-master-0 systemd[1]: Mounting /sysroot...
Jan 23 16:11:56 hub-master-0 systemd-journald[1195]: Missed 53 kernel messages
Jan 23 16:11:56 hub-master-0 kernel: SGI XFS with ACLs, security attributes, quota, no debug enabled
Jan 23 16:11:56 hub-master-0 kernel: XFS (sda4): Mounting V5 Filesystem
Jan 23 16:11:56 hub-master-0 kernel: XFS (sda4): Starting recovery (logdev: internal)
Jan 23 16:11:57 hub-master-0 kernel: XFS (sda4): Ending recovery (logdev: internal)
Jan 23 16:11:57 hub-master-0 systemd[1]: Mounted /sysroot.
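The fsck stage is effectively a no-op for XFS: /usr/sbin/fsck.xfs only reports "XFS file system" and defers checking to the journal replay the kernel performs at mount time, which is exactly the "Starting recovery / Ending recovery" pair on sda4 above and is normal after an unclean shutdown. Only if that replay failed would an offline check be warranted; a minimal sketch, assuming the filesystem has been unmounted first:

    # read-only consistency check; makes no modifications
    xfs_repair -n /dev/sda4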
Jan 23 16:11:57 hub-master-0 systemd[1]: Starting OSTree Prepare OS/...
Jan 23 16:11:57 hub-master-0 ostree-prepare-root[2098]: preparing sysroot at /sysroot
Jan 23 16:11:57 hub-master-0 ostree-prepare-root[2098]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/748cb63e77da21963e6a7bf2820c1791ccca0f1d26977d435fc68384c7fc1db4.0
Jan 23 16:11:57 hub-master-0 ostree-prepare-root[2098]: filesystem at /sysroot currently writable: 1
Jan 23 16:11:57 hub-master-0 ostree-prepare-root[2098]: sysroot.readonly configuration value: 1
Jan 23 16:11:57 hub-master-0 systemd[1]: sysroot-ostree-deploy-rhcos-deploy-748cb63e77da21963e6a7bf2820c1791ccca0f1d26977d435fc68384c7fc1db4.0.mount: Succeeded.
Jan 23 16:11:57 hub-master-0 systemd[1]: Started OSTree Prepare OS/.
Jan 23 16:11:57 hub-master-0 systemd[1]: Reached target Initrd Root File System.
Jan 23 16:11:57 hub-master-0 systemd[1]: Starting Reload Configuration from the Real Root...
Jan 23 16:11:57 hub-master-0 systemd[1]: Reloading.
Jan 23 16:11:57 hub-master-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 23 16:11:57 hub-master-0 multipathd[1981]: exit (signal)
Jan 23 16:11:57 hub-master-0 multipathd[1981]: --------shut down-------
Jan 23 16:11:58 hub-master-0 systemd[1]: multipathd.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 23 16:11:58 hub-master-0 systemd[1]: initrd-parse-etc.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Started Reload Configuration from the Real Root.
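In the block above, ostree-prepare-root pivots /sysroot to the resolved deployment (748cb63e...0), and because sysroot.readonly is 1 the sysroot is subsequently kept read-only. A sketch for inspecting the same state from the booted host, assuming stock ostree tooling and that the sysroot.readonly key is recorded in the repo config:

    # list deployments; the booted one is marked with '*'
    ostree admin status
    # confirm the read-only sysroot setting
    ostree config --repo=/sysroot/ostree/repo get sysroot.readonly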
Jan 23 16:11:58 hub-master-0 systemd[1]: Reached target Initrd File Systems.
Jan 23 16:11:58 hub-master-0 systemd[1]: Reached target Initrd Default Target.
Jan 23 16:11:58 hub-master-0 systemd[1]: Starting dracut mount hook...
Jan 23 16:11:58 hub-master-0 systemd[1]: Started dracut mount hook.
Jan 23 16:11:58 hub-master-0 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 23 16:11:58 hub-master-0 dracut-pre-pivot[2211]: Jan 23 16:11:58 | /etc/multipath.conf does not exist, blacklisting all devices.
Jan 23 16:11:58 hub-master-0 dracut-pre-pivot[2211]: Jan 23 16:11:58 | You can run "/sbin/mpathconf --enable" to create
Jan 23 16:11:58 hub-master-0 dracut-pre-pivot[2211]: Jan 23 16:11:58 | /etc/multipath.conf. See man mpathconf(8) for more details
Jan 23 16:11:58 hub-master-0 systemd[1]: Started dracut pre-pivot and cleanup hook.
Jan 23 16:11:58 hub-master-0 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Timers.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Network (Pre).
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target RDMA Hardware.
Jan 23 16:11:58 hub-master-0 systemd[1]: dracut-pre-pivot.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Remote File Systems.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Remote File Systems (Pre).
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Initrd Default Target.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Basic System.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target System Initialization.
Jan 23 16:11:58 hub-master-0 systemd[1]: rdma-load-modules@infiniband.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped Load RDMA modules from /etc/rdma/modules/infiniband.conf.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Sockets.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Slices.
Jan 23 16:11:58 hub-master-0 systemd[1]: coreos-touch-run-agetty.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped CoreOS: Touch /run/agetty.reload.
Jan 23 16:11:58 hub-master-0 systemd[1]: rdma-load-modules@roce.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped Load RDMA modules from /etc/rdma/modules/roce.conf.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Swap.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Paths.
Jan 23 16:11:58 hub-master-0 systemd[1]: systemd-sysctl.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 23 16:11:58 hub-master-0 systemd[1]: clevis-luks-askpass.path: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch.
Jan 23 16:11:58 hub-master-0 systemd[1]: systemd-udev-settle.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped udev Wait for Complete Device Initialization.
Jan 23 16:11:58 hub-master-0 systemd[1]: systemd-modules-load.service: Succeeded.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped Load Kernel Modules.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Initrd Root Device.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Subsequent (Not Ignition) boot complete.
Jan 23 16:11:58 hub-master-0 systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup.
Jan 23 16:11:59 hub-master-0 systemd[1]: dracut-mount.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped dracut mount hook.
Jan 23 16:11:59 hub-master-0 systemd[1]: systemd-tmpfiles-setup.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped target Local File Systems.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped target Local File Systems (Pre).
Jan 23 16:11:59 hub-master-0 systemd[1]: dracut-pre-mount.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped dracut pre-mount hook.
Jan 23 16:11:59 hub-master-0 systemd[1]: dracut-initqueue.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped dracut initqueue hook.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopping Open-iSCSI...
Jan 23 16:11:59 hub-master-0 iscsid[1982]: iscsid shutting down.
Jan 23 16:11:59 hub-master-0 systemd[1]: systemd-udev-trigger.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped udev Coldplug all Devices.
Jan 23 16:11:59 hub-master-0 systemd[1]: dracut-pre-trigger.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped dracut pre-trigger hook.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopping udev Kernel Device Manager...
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 23 16:11:59 hub-master-0 systemd[1]: systemd-ask-password-console.path: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 23 16:11:59 hub-master-0 systemd[1]: systemd-udevd.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped udev Kernel Device Manager.
Jan 23 16:11:59 hub-master-0 systemd[1]: iscsid.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped Open-iSCSI.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopping iSCSI UserSpace I/O driver...
Jan 23 16:11:59 hub-master-0 systemd[1]: iscsid.socket: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Closed Open-iSCSI iscsid Socket.
Jan 23 16:11:59 hub-master-0 systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 23 16:11:59 hub-master-0 systemd[1]: kmod-static-nodes.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped Create list of required static device nodes for the current kernel.
Jan 23 16:11:59 hub-master-0 systemd[1]: dracut-pre-udev.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped dracut pre-udev hook.
Jan 23 16:11:59 hub-master-0 systemd[1]: dracut-cmdline.service: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped dracut cmdline hook.
Jan 23 16:11:59 hub-master-0 systemd[1]: systemd-udevd-kernel.socket: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Closed udev Kernel Socket.
Jan 23 16:11:59 hub-master-0 systemd[1]: systemd-udevd-control.socket: Succeeded.
Jan 23 16:11:59 hub-master-0 systemd[1]: Closed udev Control Socket.
-- Subject: Unit systemd-udevd-control.socket has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udevd-control.socket has finished shutting down. Jan 23 16:11:59 hub-master-0 systemd[1]: Starting Cleanup udevd DB... -- Subject: Unit initrd-udevadm-cleanup-db.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit initrd-udevadm-cleanup-db.service has begun starting up. Jan 23 16:11:59 hub-master-0 systemd[1]: iscsiuio.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit iscsiuio.service has successfully entered the 'dead' state. Jan 23 16:11:59 hub-master-0 systemd[1]: Stopped iSCSI UserSpace I/O driver. -- Subject: Unit iscsiuio.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit iscsiuio.service has finished shutting down. Jan 23 16:11:59 hub-master-0 systemd[1]: initrd-cleanup.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit initrd-cleanup.service has successfully entered the 'dead' state. Jan 23 16:11:59 hub-master-0 systemd[1]: Started Cleaning Up and Shutting Down Daemons. -- Subject: Unit initrd-cleanup.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit initrd-cleanup.service has finished starting up. -- -- The start-up result is done. Jan 23 16:11:59 hub-master-0 systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit initrd-udevadm-cleanup-db.service has successfully entered the 'dead' state. Jan 23 16:11:59 hub-master-0 systemd[1]: Started Cleanup udevd DB. -- Subject: Unit initrd-udevadm-cleanup-db.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit initrd-udevadm-cleanup-db.service has finished starting up. -- -- The start-up result is done. Jan 23 16:11:59 hub-master-0 systemd[1]: Reached target Switch Root. -- Subject: Unit initrd-switch-root.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit initrd-switch-root.target has finished starting up. -- -- The start-up result is done. Jan 23 16:11:59 hub-master-0 systemd[1]: Starting Switch Root... -- Subject: Unit initrd-switch-root.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit initrd-switch-root.service has begun starting up. Jan 23 16:11:59 hub-master-0 systemd[1]: iscsiuio.socket: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit iscsiuio.socket has successfully entered the 'dead' state. Jan 23 16:11:59 hub-master-0 systemd[1]: Closed Open-iSCSI iscsiuio Socket. -- Subject: Unit iscsiuio.socket has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit iscsiuio.socket has finished shutting down. Jan 23 16:11:59 hub-master-0 systemd[1]: Switching root. Jan 23 16:11:59 hub-master-0 systemd-journald[1195]: Journal stopped -- Subject: The journal has been stopped -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The system journal process has shut down and closed all currently -- active journal files. 
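[Annotation] The "Journal stopped" entry above marks the hand-off from the initramfs journald to the real-root instance: the runtime journal in /run survives the switch-root, so the initrd messages stay readable after boot. A minimal sketch of inspecting this on a live host, using standard journalctl invocations (boot index 0 is the current boot):

    # PID 1 messages for the current boot, including "Switching root."
    journalctl -b 0 _PID=1 | head -n 40
    # Enumerate all boots known to the journal
    journalctl --list-boots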
Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Mounted /sysroot. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Starting OSTree Prepare OS/... Jan 23 16:12:00 hub-master-0.workload.bos2.lab ostree-prepare-root[2098]: preparing sysroot at /sysroot Jan 23 16:12:00 hub-master-0.workload.bos2.lab ostree-prepare-root[2098]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/748cb63e77da21963e6a7bf2820c1791ccca0f1d26977d435fc68384c7fc1db4.0 Jan 23 16:12:00 hub-master-0.workload.bos2.lab ostree-prepare-root[2098]: filesystem at /sysroot currently writable: 1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab ostree-prepare-root[2098]: sysroot.readonly configuration value: 1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: sysroot-ostree-deploy-rhcos-deploy-748cb63e77da21963e6a7bf2820c1791ccca0f1d26977d435fc68384c7fc1db4.0.mount: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Started OSTree Prepare OS/. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target Initrd Root File System. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Starting Reload Configuration from the Real Root... Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reloading. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopping Device-Mapper Multipath Device Controller... Jan 23 16:12:00 hub-master-0.workload.bos2.lab multipathd[1981]: exit (signal) Jan 23 16:12:00 hub-master-0.workload.bos2.lab multipathd[1981]: --------shut down------- Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: multipathd.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Device-Mapper Multipath Device Controller. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: initrd-parse-etc.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Started Reload Configuration from the Real Root. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target Initrd File Systems. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target Initrd Default Target. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Starting dracut mount hook... Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Started dracut mount hook. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Starting dracut pre-pivot and cleanup hook... Jan 23 16:12:00 hub-master-0.workload.bos2.lab dracut-pre-pivot[2211]: Jan 23 16:11:58 | /etc/multipath.conf does not exist, blacklisting all devices. Jan 23 16:12:00 hub-master-0.workload.bos2.lab dracut-pre-pivot[2211]: Jan 23 16:11:58 | You can run "/sbin/mpathconf --enable" to create Jan 23 16:12:00 hub-master-0.workload.bos2.lab dracut-pre-pivot[2211]: Jan 23 16:11:58 | /etc/multipath.conf. See man mpathconf(8) for more details Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Started dracut pre-pivot and cleanup hook. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Starting Cleaning Up and Shutting Down Daemons... Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Timers. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Network (Pre). Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target RDMA Hardware. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: dracut-pre-pivot.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped dracut pre-pivot and cleanup hook. 
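[Annotation] The dracut-pre-pivot warning above ("/etc/multipath.conf does not exist, blacklisting all devices") is expected when no multipath configuration has been created; as the message itself suggests, a default one can be generated with mpathconf. A hedged sketch, to be run on the real root (the --with_multipathd flag is optional):

    # Create a default /etc/multipath.conf and enable multipath support
    sudo mpathconf --enable --with_multipathd y
    # Dump the effective multipath configuration to verify
    sudo multipath -t | head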
Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Remote File Systems. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Remote File Systems (Pre). Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Initrd Default Target. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Basic System. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target System Initialization. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: rdma-load-modules@infiniband.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Load RDMA modules from /etc/rdma/modules/infiniband.conf. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Sockets. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Slices. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: coreos-touch-run-agetty.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped CoreOS: Touch /run/agetty.reload. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: rdma-load-modules@roce.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Load RDMA modules from /etc/rdma/modules/roce.conf. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Swap. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Paths. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-sysctl.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Apply Kernel Variables. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: clevis-luks-askpass.path: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-udev-settle.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped udev Wait for Complete Device Initialization. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-modules-load.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Load Kernel Modules. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Initrd Root Device. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Subsequent (Not Ignition) boot complete. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: dracut-mount.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped dracut mount hook. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-tmpfiles-setup.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Create Volatile Files and Directories. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Local File Systems. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Local File Systems (Pre). Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: dracut-pre-mount.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped dracut pre-mount hook. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: dracut-initqueue.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped dracut initqueue hook. 
Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopping Open-iSCSI... Jan 23 16:12:00 hub-master-0.workload.bos2.lab iscsid[1982]: iscsid shutting down. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-udev-trigger.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped udev Coldplug all Devices. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: dracut-pre-trigger.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped dracut pre-trigger hook. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopping udev Kernel Device Manager... Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Local Encrypted Volumes. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-ask-password-console.path: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-udevd.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped udev Kernel Device Manager. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: iscsid.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Open-iSCSI. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopping iSCSI UserSpace I/O driver... Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: iscsid.socket: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Closed Open-iSCSI iscsid Socket. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Create Static Device Nodes in /dev. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: kmod-static-nodes.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Create list of required static device nodes for the current kernel. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: dracut-pre-udev.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped dracut pre-udev hook. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: dracut-cmdline.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped dracut cmdline hook. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-udevd-kernel.socket: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Closed udev Kernel Socket. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-udevd-control.socket: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Closed udev Control Socket. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Starting Cleanup udevd DB... Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: iscsiuio.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped iSCSI UserSpace I/O driver. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: initrd-cleanup.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Started Cleaning Up and Shutting Down Daemons. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Started Cleanup udevd DB. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target Switch Root. 
Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Starting Switch Root... Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: iscsiuio.socket: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Closed Open-iSCSI iscsiuio Socket. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Switching root. Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: printk: systemd: 29 output lines suppressed due to ratelimiting Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: audit: type=1404 audit(1674490319.911:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: SELinux: policy capability network_peer_controls=1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: SELinux: policy capability open_perms=1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: SELinux: policy capability extended_socket_class=1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: SELinux: policy capability always_check_network=0 Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab kernel: audit: type=1403 audit(1674490320.096:3): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Successfully loaded SELinux policy in 187.042ms. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 24.644ms. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd 239 (239-58.el8_6.9) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy) Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Detected architecture x86-64. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Set hostname to hub-master-0.workload.bos2.lab. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-journald.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-journald.service: Consumed 0 CPU time Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: initrd-switch-root.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Switch Root. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: initrd-switch-root.service: Consumed 0 CPU time Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-journald.service: Service has no hold-off time (RestartSec=0), scheduling restart. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped Journal Service. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-journald.service: Consumed 0 CPU time Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Starting Journal Service... Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Listening on udev Control Socket. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Mounting POSIX Message Queue File System...
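[Annotation] The audit record type=1404 with enforcing=1 and the "Successfully loaded SELinux policy" entry above show SELinux entering enforcing mode during the switch to the real root. The state can be confirmed later from a shell with standard policycoreutils tools:

    # Confirm the mode reported by the audit record above
    getenforce            # expected on this host: Enforcing
    sestatus | head -n 5  # summary incl. loaded policy name and mode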
Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Listening on Device-mapper event daemon FIFOs. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd-journald[2274]: Journal started -- Subject: The journal has been started -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The system journal process has started up, opened the journal -- files for writing and is now ready to process requests. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd-journald[2274]: Runtime journal (/run/log/journal/2d1c3c5e56e644be8e0c86aaa5d61f6a) is 8.0M, max 4.0G, 3.9G free. -- Subject: Disk space used by the journal -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Runtime journal (/run/log/journal/2d1c3c5e56e644be8e0c86aaa5d61f6a) is currently using 8.0M. -- Maximum allowed usage is set to 4.0G. -- Leaving at least 4.0G free (of currently available 125.5G of disk space). -- Enforced usage limit is thus 4.0G, of which 3.9G are still available. -- -- The limits controlling how much disk space is used by the journal may -- be configured with SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=, -- RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize= settings in -- /etc/systemd/journald.conf. See journald.conf(5) for details. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Listening on LVM2 poll daemon socket. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-fsck-root.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped File System Check on Root Device. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: systemd-fsck-root.service: Consumed 0 CPU time Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Started Forward Password Requests to Clevis Directory Watch. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target Local Encrypted Volumes (Pre). Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Switch Root. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Initrd Root File System. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: ostree-prepare-root.service: Succeeded. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Stopped OSTree Prepare OS/. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: ostree-prepare-root.service: Consumed 0 CPU time Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Created slice system-getty.slice. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target Remote Encrypted Volumes. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target Swap. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Listening on initctl Compatibility Named Pipe. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Listening on Process Core Dump Socket. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target Host and Network Name Lookups. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Listening on RPCbind Server Activation Socket. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Reached target RPC Port Mapper. Jan 23 16:12:00 hub-master-0.workload.bos2.lab systemd[1]: Created slice system-sshd\x2dkeygen.slice. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounting Kernel Debug File System... 
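[Annotation] The journald size report above is purely informational; the limits it names (RuntimeMaxUse=, RuntimeKeepFree=, etc.) live in journald.conf, and a drop-in fragment is the usual override mechanism. A sketch with illustrative values only (512M is an assumption, not this host's setting):

    # Cap the runtime (/run) journal via a drop-in; values are illustrative
    sudo mkdir -p /etc/systemd/journald.conf.d
    printf '[Journal]\nRuntimeMaxUse=512M\n' | \
        sudo tee /etc/systemd/journald.conf.d/10-size.conf
    sudo systemctl restart systemd-journald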
Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounting Temporary Directory (/tmp)... Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Reached target Remote File Systems. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Reached target Synchronize afterburn-sshkeys@.service template instances. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Forward Password Requests to Wall Directory Watch. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting Create list of required static device nodes for the current kernel... Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Created slice User and Session Slice. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Reached target Slices. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounting Huge Pages File System... Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting Load Kernel Modules... Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-modules-load[2288]: Module 'msr' is builtin Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Created slice system-systemd\x2dfsck.slice. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-modules-load[2288]: Inserted module 'ip_tables' Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Dispatch Password Requests to Console Directory Watch. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Reached target Local Encrypted Volumes. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Listening on udev Kernel Socket. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting udev Coldplug all Devices... Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting CoreOS: Set printk To Level 4 (warn)... Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Stopped target Initrd File Systems. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: sysroot-sysroot-ostree-deploy-rhcos-var.mount: Succeeded. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: sysroot-sysroot-ostree-deploy-rhcos-var.mount: Consumed 0 CPU time Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: sysroot-sysroot.mount: Succeeded. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: sysroot-sysroot.mount: Consumed 0 CPU time Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: sysroot-usr.mount: Succeeded. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: sysroot-usr.mount: Consumed 0 CPU time Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: sysroot-etc.mount: Succeeded. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: sysroot-etc.mount: Consumed 0 CPU time Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Journal Service. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounted POSIX Message Queue File System. -- Subject: Unit dev-mqueue.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dev-mqueue.mount has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounted Kernel Debug File System. -- Subject: Unit sys-kernel-debug.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sys-kernel-debug.mount has finished starting up. -- -- The start-up result is done. 
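[Annotation] The systemd-modules-load entries above ("msr" is builtin, "ip_tables" inserted) come from static module lists under modules-load.d. A quick cross-check, using the standard search locations:

    # Show which fragments request modules at boot
    cat /usr/lib/modules-load.d/*.conf /etc/modules-load.d/*.conf 2>/dev/null
    # ip_tables should now be loaded; msr is built in, so lsmod omits it
    lsmod | grep ip_tables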
Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounted Temporary Directory (/tmp). -- Subject: Unit tmp.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit tmp.mount has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. -- Subject: Unit lvm2-monitor.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit lvm2-monitor.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Create list of required static device nodes for the current kernel. -- Subject: Unit kmod-static-nodes.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kmod-static-nodes.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounted Huge Pages File System. -- Subject: Unit dev-hugepages.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dev-hugepages.mount has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Load Kernel Modules. -- Subject: Unit systemd-modules-load.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-modules-load.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started CoreOS: Set printk To Level 4 (warn). -- Subject: Unit coreos-printk-quiet.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit coreos-printk-quiet.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounting FUSE Control File System... -- Subject: Unit sys-fs-fuse-connections.mount has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sys-fs-fuse-connections.mount has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting Apply Kernel Variables... -- Subject: Unit systemd-sysctl.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-sysctl.service has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting Create Static Device Nodes in /dev... -- Subject: Unit systemd-tmpfiles-setup-dev.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-tmpfiles-setup-dev.service has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Mounted FUSE Control File System. -- Subject: Unit sys-fs-fuse-connections.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sys-fs-fuse-connections.mount has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Apply Kernel Variables. -- Subject: Unit systemd-sysctl.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-sysctl.service has finished starting up. -- -- The start-up result is done. 
Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Create Static Device Nodes in /dev. -- Subject: Unit systemd-tmpfiles-setup-dev.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-tmpfiles-setup-dev.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting udev Kernel Device Manager... -- Subject: Unit systemd-udevd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udevd.service has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started udev Kernel Device Manager. -- Subject: Unit systemd-udevd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udevd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started udev Coldplug all Devices. -- Subject: Unit systemd-udev-trigger.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udev-trigger.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting udev Wait for Complete Device Initialization... -- Subject: Unit systemd-udev-settle.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udev-settle.service has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: wmi_bus wmi_bus-PNP0C14:00: WQBC data block query control method not found Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2416]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2420]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2426]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2498]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si: IPMI System Interface driver Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca8 regsize 1 spacing 4 irq 10 Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca8] regsize 1 spacing 4 irq 10 Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2416]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2498]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2426]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2420]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
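[Annotation] The udev workers above report the default interface naming scheme 'rhel-8.0', and link_config notes that speed/duplex are left untouched while autonegotiation is enabled. How udev derives a predictable NIC name can be replayed with the net_id builtin; the interface name eno1 below is hypothetical, not taken from this log:

    # Replay the net_id builtin for one interface; eno1 is a placeholder
    udevadm test-builtin net_id /sys/class/net/eno1 2>&1 | head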
Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ACPI Error: No handler for Region [SYSI] (000000007306ea02) [IPMI] (20210604/evregion-135) Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting Load RDMA modules from /etc/rdma/modules/rdma.conf... -- Subject: Unit rdma-load-modules@rdma.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@rdma.service has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2368]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2375]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2375]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ACPI Error: Region IPMI (ID=7) has no handler (20210604/exfldio-265) Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ACPI Error: Aborting method \_SB.PMI0._GHL due to previous error (AE_NOT_EXIST) (20210604/psparse-531) Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ACPI Error: Aborting method \_SB.PMI0._PMC due to previous error (AE_NOT_EXIST) (20210604/psparse-531) Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2368]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ACPI Error: AE_NOT_EXIST, Evaluating _PMC (20210604/power_meter-759) Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: input: PC Speaker as /devices/platform/pcspkr/input/input3 Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting RDMA Node Description Daemon... -- Subject: Unit rdma-ndd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-ndd.service has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca8, slave address 0x20, irq 10 Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started RDMA Node Description Daemon. -- Subject: Unit rdma-ndd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-ndd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting Load RDMA modules from /etc/rdma/modules/roce.conf... -- Subject: Unit rdma-load-modules@roce.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@roce.service has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Starting Load RDMA modules from /etc/rdma/modules/infiniband.conf... 
-- Subject: Unit rdma-load-modules@infiniband.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@infiniband.service has begun starting up. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Load RDMA modules from /etc/rdma/modules/roce.conf. -- Subject: Unit rdma-load-modules@roce.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@roce.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Load RDMA modules from /etc/rdma/modules/infiniband.conf. -- Subject: Unit rdma-load-modules@infiniband.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@infiniband.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si IPI0001:00: The BMC does not support setting the recv irq bit, compensating, but the BMC needs to be fixed. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RPC: Registered named UNIX socket transport module. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RPC: Registered udp transport module. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RPC: Registered tcp transport module. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RAPL PMU: API unit is 2^-32 Joules, 2 fixed counters, 655360 ms ovfl timer Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RAPL PMU: hw unit of domain dram 2^-16 Joules Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si IPI0001:00: Using irq 10 Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-udevd[2427]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x0002a2, prod_id: 0x0100, dev_id: 0x20) Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: iTCO_vendor_support: vendor-support=0 Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11 Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: iTCO_wdt: Found a Intel PCH TCO device (Version=4, TCOBASE=0x0400) Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.4) Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd-modules-load[2554]: Inserted module 'rpcrdma' Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RPC: Registered rdma transport module. Jan 23 16:12:01 hub-master-0.workload.bos2.lab kernel: RPC: Registered rdma backchannel transport module. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Started Load RDMA modules from /etc/rdma/modules/rdma.conf. -- Subject: Unit rdma-load-modules@rdma.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-load-modules@rdma.service has finished starting up. -- -- The start-up result is done. 
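[Annotation] Each rdma-load-modules@ instance above reads a plain list of module names from the /etc/rdma/modules/*.conf file named by its instance argument and loads them, as the "Inserted module 'rpcrdma'" line shows for the rdma.conf instance. A sketch of inspecting and reproducing one instance by hand:

    # The instance argument selects the module list that gets loaded
    cat /etc/rdma/modules/rdma.conf
    # Loading one listed module manually, as the unit did for rpcrdma
    sudo modprobe rpcrdma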
Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Reached target RDMA Hardware. -- Subject: Unit rdma-hw.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rdma-hw.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:01 hub-master-0.workload.bos2.lab systemd[1]: Reached target Network (Pre). -- Subject: Unit network-pre.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit network-pre.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: checking generic (91000000 300000) vs hw (91000000 1000000) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: fb: switching to mgag200drmfb from EFI VGA Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: Console: switching to colour dummy device 80x25 Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: mgag200 0000:03:00.0: vgaarb: deactivate vga console Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: [drm] Initialized mgag200 1.0.0 20110418 for 0000:03:00.0 on minor 0 Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: fbcon: mgag200drmfb (fb0) is primary device Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: Console: switching to colour frame buffer device 128x48 Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: mgag200 0000:03:00.0: [drm] fb0: mgag200drmfb frame buffer device Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC i10nm: No hbm memory Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC MC0: Giving out device to module i10nm_edac controller Intel_10nm Socket#0 IMC#0: DEV 0000:7e:0c.0 (INTERRUPT) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC MC1: Giving out device to module i10nm_edac controller Intel_10nm Socket#0 IMC#1: DEV 0000:7e:0d.0 (INTERRUPT) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC MC2: Giving out device to module i10nm_edac controller Intel_10nm Socket#0 IMC#2: DEV 0000:7e:0e.0 (INTERRUPT) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC MC3: Giving out device to module i10nm_edac controller Intel_10nm Socket#0 IMC#3: DEV 0000:7e:0f.0 (INTERRUPT) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC MC4: Giving out device to module i10nm_edac controller Intel_10nm Socket#1 IMC#0: DEV 0000:fe:0c.0 (INTERRUPT) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC MC5: Giving out device to module i10nm_edac controller Intel_10nm Socket#1 IMC#1: DEV 0000:fe:0d.0 (INTERRUPT) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC MC6: Giving out device to module i10nm_edac controller Intel_10nm Socket#1 IMC#2: DEV 0000:fe:0e.0 (INTERRUPT) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC MC7: Giving out device to module i10nm_edac controller Intel_10nm Socket#1 IMC#3: DEV 0000:fe:0f.0 (INTERRUPT) Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: EDAC i10nm: v0.0.5 Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: intel_rapl_common: Found RAPL domain package Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: intel_rapl_common: Found RAPL domain dram Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: intel_rapl_common: DRAM domain energy unit 15300pj Jan 23 16:12:02 
hub-master-0.workload.bos2.lab kernel: intel_rapl_common: Found RAPL domain package Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: intel_rapl_common: Found RAPL domain dram Jan 23 16:12:02 hub-master-0.workload.bos2.lab kernel: intel_rapl_common: DRAM domain energy unit 15300pj Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started udev Wait for Complete Device Initialization. -- Subject: Unit systemd-udev-settle.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-udev-settle.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Reached target Local File Systems (Pre). -- Subject: Unit local-fs-pre.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit local-fs-pre.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: var.mount: Directory /var to mount over is not empty, mounting anyway. -- Subject: Mount point is not empty -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The directory /var is specified as the mount point (second field in -- /etc/fstab or Where= field in systemd unit file) and is not empty. -- This does not interfere with mounting, but the pre-existing files in -- this directory become inaccessible. To see those over-mounted files, -- please manually mount the underlying file system to a secondary -- location. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Mounting /var... -- Subject: Unit var.mount has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit var.mount has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting File System Check on /dev/disk/by-uuid/6b5eaf26-520d-4e42-90f4-4869c15c705f... -- Subject: Unit systemd-fsck@dev-disk-by\x2duuid-6b5eaf26\x2d520d\x2d4e42\x2d90f4\x2d4869c15c705f.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-fsck@dev-disk-by\x2duuid-6b5eaf26\x2d520d\x2d4e42\x2d90f4\x2d4869c15c705f.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Mounted /var. -- Subject: Unit var.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit var.mount has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting OSTree Remount OS/ Bind Mounts... -- Subject: Unit ostree-remount.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ostree-remount.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started OSTree Remount OS/ Bind Mounts. -- Subject: Unit ostree-remount.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ostree-remount.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Flush Journal to Persistent Storage... -- Subject: Unit systemd-journal-flush.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-journal-flush.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Load/Save Random Seed...
-- Subject: Unit systemd-random-seed.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-random-seed.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-journald[2274]: Time spent on flushing to /var is 10.689ms for 2465 entries. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-journald[2274]: System journal (/var/log/journal/2d1c3c5e56e644be8e0c86aaa5d61f6a) is 408.0M, max 4.0G, 3.6G free. -- Subject: Disk space used by the journal -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- System journal (/var/log/journal/2d1c3c5e56e644be8e0c86aaa5d61f6a) is currently using 408.0M. -- Maximum allowed usage is set to 4.0G. -- Leaving at least 4.0G free (of currently available 431.3G of disk space). -- Enforced usage limit is thus 4.0G, of which 3.6G are still available. -- -- The limits controlling how much disk space is used by the journal may -- be configured with SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=, -- RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize= settings in -- /etc/systemd/journald.conf. See journald.conf(5) for details. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-fsck[2821]: boot: recovering journal Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-fsck[2821]: boot: clean, 324/98304 files, 140535/393216 blocks Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Load/Save Random Seed. -- Subject: Unit systemd-random-seed.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-random-seed.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started File System Check on /dev/disk/by-uuid/6b5eaf26-520d-4e42-90f4-4869c15c705f. -- Subject: Unit systemd-fsck@dev-disk-by\x2duuid-6b5eaf26\x2d520d\x2d4e42\x2d90f4\x2d4869c15c705f.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-fsck@dev-disk-by\x2duuid-6b5eaf26\x2d520d\x2d4e42\x2d90f4\x2d4869c15c705f.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Mounting CoreOS Dynamic Mount for /boot... -- Subject: Unit boot.mount has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit boot.mount has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Flush Journal to Persistent Storage. -- Subject: Unit systemd-journal-flush.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-journal-flush.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab kernel: EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null) Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Mounted CoreOS Dynamic Mount for /boot. -- Subject: Unit boot.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit boot.mount has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Reached target Local File Systems. -- Subject: Unit local-fs.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit local-fs.target has finished starting up. 
-- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Restore /run/initramfs on shutdown... -- Subject: Unit dracut-shutdown.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dracut-shutdown.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Create Volatile Files and Directories... -- Subject: Unit systemd-tmpfiles-setup.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-tmpfiles-setup.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Run update-ca-trust... -- Subject: Unit coreos-update-ca-trust.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit coreos-update-ca-trust.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Restore /run/initramfs on shutdown. -- Subject: Unit dracut-shutdown.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dracut-shutdown.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-tmpfiles[2843]: [/usr/lib/tmpfiles.d/pkg-dbus-daemon.conf:1] Duplicate line for path "/var/lib/dbus", ignoring. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-tmpfiles[2843]: [/usr/lib/tmpfiles.d/tmp.conf:12] Duplicate line for path "/var/tmp", ignoring. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-tmpfiles[2843]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-tmpfiles[2843]: [/usr/lib/tmpfiles.d/var.conf:19] Duplicate line for path "/var/cache", ignoring. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-tmpfiles[2843]: [/usr/lib/tmpfiles.d/var.conf:21] Duplicate line for path "/var/lib", ignoring. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-tmpfiles[2843]: [/usr/lib/tmpfiles.d/var.conf:23] Duplicate line for path "/var/spool", ignoring. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-tmpfiles[2843]: "/home" already exists and is not a directory. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd-tmpfiles[2843]: "/srv" already exists and is not a directory. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Create Volatile Files and Directories. -- Subject: Unit systemd-tmpfiles-setup.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-tmpfiles-setup.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Security Auditing Service... -- Subject: Unit auditd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit auditd.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting RHCOS Fix SELinux Labeling For /usr/local/sbin... -- Subject: Unit rhcos-usrlocal-selinux-fixup.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rhcos-usrlocal-selinux-fixup.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting RHEL CoreOS Rebuild SELinux Policy If Necessary... 
-- Subject: Unit rhcos-selinux-policy-upgrade.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rhcos-selinux-policy-upgrade.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab chcon[2852]: changing security context of '/usr/local/sbin' Jan 23 16:12:06 hub-master-0.workload.bos2.lab rhcos-rebuild-selinux-policy[2853]: RHEL_VERSION=8.6Checking for policy recompilation Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started RHCOS Fix SELinux Labeling For /usr/local/sbin. -- Subject: Unit rhcos-usrlocal-selinux-fixup.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rhcos-usrlocal-selinux-fixup.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab rhcos-rebuild-selinux-policy[2858]: -rw-r--r--. 1 root root 8914149 Jan 22 19:12 /etc/selinux/targeted/policy/policy.31 Jan 23 16:12:06 hub-master-0.workload.bos2.lab rhcos-rebuild-selinux-policy[2858]: -rw-r--r--. 2 root root 8914149 Jan 1 1970 /usr/etc/selinux/targeted/policy/policy.31 Jan 23 16:12:06 hub-master-0.workload.bos2.lab auditd[2861]: No plugins found, not dispatching events Jan 23 16:12:06 hub-master-0.workload.bos2.lab auditd[2861]: Init complete, auditd 3.0.7 listening for events (startup state enable) Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started RHEL CoreOS Rebuild SELinux Policy If Necessary. -- Subject: Unit rhcos-selinux-policy-upgrade.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rhcos-selinux-policy-upgrade.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2864]: /sbin/augenrules: No change Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: No rules Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: enabled 1 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: failure 1 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: pid 2861 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: rate_limit 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_limit 8192 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: lost 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog 4 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_wait_time 60000 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_wait_time_actual 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: enabled 1 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: failure 1 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: pid 2861 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: rate_limit 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_limit 8192 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: lost 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog 1 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_wait_time 60000 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_wait_time_actual 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: enabled 1 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: failure 1 Jan 23 16:12:06 hub-master-0.workload.bos2.lab 
augenrules[2877]: pid 2861 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: rate_limit 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_limit 8192 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: lost 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog 4 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_wait_time 60000 Jan 23 16:12:06 hub-master-0.workload.bos2.lab augenrules[2877]: backlog_wait_time_actual 0 Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Security Auditing Service. -- Subject: Unit auditd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit auditd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Update UTMP about System Boot/Shutdown... -- Subject: Unit systemd-update-utmp.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-update-utmp.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Update UTMP about System Boot/Shutdown. -- Subject: Unit systemd-update-utmp.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-update-utmp.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Run update-ca-trust. -- Subject: Unit coreos-update-ca-trust.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit coreos-update-ca-trust.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Reached target System Initialization. -- Subject: Unit sysinit.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sysinit.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Daily rotation of log files. -- Subject: Unit logrotate.timer has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit logrotate.timer has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Monitor console-login-helper-messages runtime issue snippets directory for changes. -- Subject: Unit console-login-helper-messages-issuegen.path has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.path has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Listening on bootupd.socket. -- Subject: Unit bootupd.socket has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit bootupd.socket has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Listening on D-Bus System Message Bus Socket. -- Subject: Unit dbus.socket has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dbus.socket has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Reached target Sockets. 
-- Subject: Unit sockets.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sockets.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started Daily Cleanup of Temporary Directories. -- Subject: Unit systemd-tmpfiles-clean.timer has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-tmpfiles-clean.timer has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started OSTree Monitor Staged Deployment. -- Subject: Unit ostree-finalize-staged.path has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ostree-finalize-staged.path has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Reached target Paths. -- Subject: Unit paths.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit paths.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Reached target Basic System. -- Subject: Unit basic.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit basic.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Generation of shadow ID ranges for CRI-O... -- Subject: Unit crio-subid.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-subid.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started irqbalance daemon. -- Subject: Unit irqbalance.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit irqbalance.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet... -- Subject: Unit console-login-helper-messages-issuegen.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Open vSwitch Database Unit... -- Subject: Unit ovsdb-server.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ovsdb-server.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting CRI-O Auto Update Script... -- Subject: Unit crio-wipe.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-wipe.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Create Ignition Status Issue Files... -- Subject: Unit coreos-ignition-write-issues.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit coreos-ignition-write-issues.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting System Security Services Daemon... 
-- Subject: Unit sssd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sssd.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting Load CPU microcode update... -- Subject: Unit microcode.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit microcode.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Starting NTP client/server... -- Subject: Unit chronyd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit chronyd.service has begun starting up. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Started daily update of the root trust anchor for DNSSEC. -- Subject: Unit unbound-anchor.timer has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit unbound-anchor.timer has finished starting up. -- -- The start-up result is done. Jan 23 16:12:06 hub-master-0.workload.bos2.lab systemd[1]: Reached target Timers. -- Subject: Unit timers.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit timers.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started D-Bus System Message Bus. -- Subject: Unit dbus.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit dbus.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Reached target sshd-keygen.target. -- Subject: Unit sshd-keygen.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sshd-keygen.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate SSH keys snippet for display via console-login-helper-messages... -- Subject: Unit console-login-helper-messages-gensnippet-ssh-keys.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-gensnippet-ssh-keys.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab chronyd[2922]: chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Jan 23 16:12:07 hub-master-0.workload.bos2.lab chronyd[2922]: commandkey directive is no longer supported Jan 23 16:12:07 hub-master-0.workload.bos2.lab chronyd[2922]: generatecommandkey directive is no longer supported Jan 23 16:12:07 hub-master-0.workload.bos2.lab chronyd[2922]: Could not read valid frequency and skew from driftfile /var/lib/chrony/drift Jan 23 16:12:07 hub-master-0.workload.bos2.lab chown[2921]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Generate SSH keys snippet for display via console-login-helper-messages. -- Subject: Unit console-login-helper-messages-gensnippet-ssh-keys.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-gensnippet-ssh-keys.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started NTP client/server. 
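[Annotation] chronyd 4.1 warns that the commandkey and generatecommandkey directives are no longer supported; both belong to the old command-key authentication scheme that chrony dropped. A minimal cleanup sketch, assuming the stale directives live in the stock /etc/chrony.conf:

    #!/bin/bash
    # Back up the config, then drop the two directives chronyd 4.x rejects.
    cp /etc/chrony.conf /etc/chrony.conf.bak
    sed -i -e '/^commandkey/d' -e '/^generatecommandkey/d' /etc/chrony.conf
    systemctl restart chronyd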
-- Subject: Unit chronyd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit chronyd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: crio-subid.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-subid.service has successfully entered the 'dead' state. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Generation of shadow ID ranges for CRI-O. -- Subject: Unit crio-subid.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-subid.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: crio-subid.service: Consumed 16ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-subid.service completed and consumed the indicated resources. Jan 23 16:12:07 hub-master-0.workload.bos2.lab sssd[2907]: Starting up Jan 23 16:12:07 hub-master-0.workload.bos2.lab sssd_be[3019]: Starting up Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: microcode.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit microcode.service has successfully entered the 'dead' state. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Load CPU microcode update. -- Subject: Unit microcode.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit microcode.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: microcode.service: Consumed 47ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit microcode.service completed and consumed the indicated resources. Jan 23 16:12:07 hub-master-0.workload.bos2.lab sssd_nss[3043]: Starting up Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started System Security Services Daemon. -- Subject: Unit sssd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sssd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Reached target User and Group Name Lookups. -- Subject: Unit nss-user-lookup.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit nss-user-lookup.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Login Service... -- Subject: Unit systemd-logind.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-logind.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-logind[3052]: New seat seat0. -- Subject: A new seat seat0 is now available -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new seat seat0 has been configured and is now available. 
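[Annotation] The "Consumed 16ms CPU time" / "Consumed 47ms CPU time" messages come from systemd's per-unit CPU accounting for short-lived oneshot units. A sketch for reading the same counter directly, using a unit property systemd exposes:

    #!/bin/bash
    # CPUUsageNSec holds the unit's accumulated CPU time in nanoseconds.
    systemctl show -p CPUUsageNSec crio-subid.service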
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-logind[3052]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-logind[3052]: Watching system buttons on /dev/input/event1 (Avocent Keyboard/Mouse Function) Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Login Service. -- Subject: Unit systemd-logind.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-logind.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Create Ignition Status Issue Files. -- Subject: Unit coreos-ignition-write-issues.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit coreos-ignition-write-issues.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovsdb-server[3065]: ovs|00002|stream_ssl|ERR|SSL_use_certificate_file: error:02001002:system library:fopen:No such file or directory Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovsdb-server[3065]: ovs|00003|stream_ssl|ERR|SSL_use_PrivateKey_file: error:20074002:BIO routines:file_ctrl:system lib Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-ctl[2970]: Starting ovsdb-server. Jan 23 16:12:07 hub-master-0.workload.bos2.lab crio[2903]: time="2023-01-23 16:12:07.308638756Z" level=info msg="Starting CRI-O, version: 1.25.1-5.rhaos4.12.git6005903.el8, git: unknown(clean)" Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vsctl[3068]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.3.0 Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1233855812-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-metacopy\x2dcheck1233855812-merged.mount has successfully entered the 'dead' state. Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vsctl[3076]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.17.4 "external-ids:system-id=\"6e3b0d84-1d66-4174-9852-54771250909a\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhcos\"" "system-version=\"4.12\"" Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-ctl[2970]: Configuring Open vSwitch system IDs. Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vsctl[3082]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=hub-master-0.workload.bos2.lab Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-ctl[2970]: Enabling remote OVSDB managers. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Open vSwitch Database Unit. -- Subject: Unit ovsdb-server.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ovsdb-server.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Open vSwitch Delete Transient Ports... -- Subject: Unit ovs-delete-transient-ports.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ovs-delete-transient-ports.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Open vSwitch Delete Transient Ports. 
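[Annotation] The stream_ssl errors from ovsdb-server above (repeated by ovs-vswitchd just below) mean the certificate and key files it was pointed at do not exist yet this early in boot. A sketch for inspecting the configured paths, assuming the standard ovs-vsctl commands; ovs-ctl may also pass the paths on the daemon command line, so both places are checked:

    #!/bin/bash
    # Show the private-key/certificate/CA paths recorded in the OVSDB, if any.
    ovs-vsctl get-ssl
    # Look for SSL-related options on the running daemon's command line.
    ps -o args= -C ovsdb-server | tr ' ' '\n' | grep -E 'certificate|private-key|ca-cert' || true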
-- Subject: Unit ovs-delete-transient-ports.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ovs-delete-transient-ports.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Open vSwitch Forwarding Unit... -- Subject: Unit ovs-vswitchd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ovs-vswitchd.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab crio[2903]: time="2023-01-23 16:12:07.419369308Z" level=info msg="Checking whether cri-o should wipe containers: open /var/run/crio/version: no such file or directory" Jan 23 16:12:07 hub-master-0.workload.bos2.lab crio[2903]: time="2023-01-23 16:12:07.419913045Z" level=info msg="File /var/lib/crio/clean.shutdown not found. Wiping storage directory /var/lib/containers/storage because of suspected dirty shutdown" Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: openvswitch: Open vSwitch switching datapath Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-ctl[3131]: Inserting openvswitch module. Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00008|stream_ssl|ERR|SSL_use_certificate_file: error:02001002:system library:fopen:No such file or directory Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00009|stream_ssl|ERR|SSL_use_PrivateKey_file: error:20074002:BIO routines:file_ctrl:system lib Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00010|stream_ssl|ERR|failed to load client certificates from /ovn-ca/ca-bundle.crt: error:140AD002:SSL routines:SSL_CTX_use_certificate_file:system lib Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device ovs-system entered promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3148]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3148]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3148]: Could not generate persistent MAC address for ovs-system: No such file or directory Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: Timeout policy base is empty Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: Failed to associated timeout policy `ovs_test_tp' Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device eno12399 entered promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device br-ex entered promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3300]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3300]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3300]: Could not generate persistent MAC address for br-ex: No such file or directory Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3306]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3306]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
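[Annotation] The crio[2903] lines above show crio-wipe's dirty-shutdown heuristic: with no /var/run/crio/version and no /var/lib/crio/clean.shutdown sentinel, it assumes an unclean shutdown and wipes container storage. A simplified sketch of that check; the paths come from the log, while the real logic lives inside cri-o:

    #!/bin/bash
    # Simplified reconstruction of the decision crio logged above: a missing
    # clean.shutdown sentinel is treated as a dirty shutdown.
    if [ ! -f /var/lib/crio/clean.shutdown ]; then
        echo "suspected dirty shutdown: cri-o would wipe /var/lib/containers/storage"
    fi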
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3306]: Could not generate persistent MAC address for genev_sys_6081: No such file or directory Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device genev_sys_6081 entered promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3305]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device ovn-k8s-mp0 entered promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3305]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3305]: Could not generate persistent MAC address for ovn-k8s-mp0: No such file or directory Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device br-int entered promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3312]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3312]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-ctl[3103]: Starting ovs-vswitchd. Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-ctl[3103]: Enabling remote OVSDB managers. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Open vSwitch Forwarding Unit. -- Subject: Unit ovs-vswitchd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ovs-vswitchd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Open vSwitch... -- Subject: Unit openvswitch.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit openvswitch.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Open vSwitch. -- Subject: Unit openvswitch.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit openvswitch.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Network Manager... -- Subject: Unit NetworkManager.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.6911] NetworkManager (version 1.36.0-11.el8_6) is starting... (after a restart) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.6913] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-disable-default-plugins.conf, 20-client-id-from-mac.conf) (etc: 20-keyfiles.conf, 99-kni.conf, sdn.conf) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.6930] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Network Manager. -- Subject: Unit NetworkManager.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Network Manager Wait Online... 
-- Subject: Unit NetworkManager-wait-online.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-wait-online.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Reached target Network. -- Subject: Unit network.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit network.target has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting OpenSSH server daemon... -- Subject: Unit sshd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sshd.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.7050] manager[0x55db37588000]: monitoring kernel firmware directory '/lib/firmware'. Jan 23 16:12:07 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.6' (uid=0 pid=3328 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Jan 23 16:12:07 hub-master-0.workload.bos2.lab sshd[3332]: Server listening on 0.0.0.0 port 22. Jan 23 16:12:07 hub-master-0.workload.bos2.lab sshd[3332]: Server listening on :: port 22. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Hostname Service... -- Subject: Unit systemd-hostnamed.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-hostnamed.service has begun starting up. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started OpenSSH server daemon. -- Subject: Unit sshd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sshd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:12:07 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Hostname Service. -- Subject: Unit systemd-hostnamed.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-hostnamed.service has finished starting up. -- -- The start-up result is done. 
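[Annotation] At startup NetworkManager listed the configuration files it merged (NetworkManager.conf plus the lib and etc snippets). The effective result of that merge can be dumped with a flag NetworkManager supports; a sketch:

    #!/bin/bash
    # Print the merged configuration, i.e. NetworkManager.conf combined with
    # snippets such as 20-keyfiles.conf, 99-kni.conf and sdn.conf listed above.
    NetworkManager --print-config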
Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8317] hostname: hostname: using hostnamed Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8317] hostname: static hostname changed from (none) to "hub-master-0.workload.bos2.lab" Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8320] dns-mgr[0x55db3755d250]: init: dns=default,systemd-resolved rc-manager=unmanaged Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8356] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.36.0-11.el8_6/libnm-device-plugin-ovs.so) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8377] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.36.0-11.el8_6/libnm-device-plugin-team.so) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8377] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8378] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8378] manager: Networking is enabled by state file Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8380] settings: Loaded settings plugin: keyfile (internal) Jan 23 16:12:07 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=3328 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8396] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.36.0-11.el8_6/libnm-settings-plugin-ifcfg-rh.so") Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has begun starting up. 
Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8430] dhcp-init: Using DHCP client 'internal' Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8430] device (lo): carrier: link connected Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8431] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8436] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/2) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8437] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8440] device (br-ex): carrier: link connected Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: IPv6: ADDRCONF(NETDEV_UP): br-ex: link is not ready Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8446] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8450] device (eno12409): carrier: link connected Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8454] manager: (eno12409): new Ethernet device (/org/freedesktop/NetworkManager/Devices/4) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8456] device (eno12409): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8470] manager: (eno8303): new Ethernet device (/org/freedesktop/NetworkManager/Devices/5) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8472] device (eno8303): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8481] manager: (eno8403): new Ethernet device (/org/freedesktop/NetworkManager/Devices/6) Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: IPv6: ADDRCONF(NETDEV_UP): eno8303: link is not ready Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8482] device (eno8403): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: IPv6: ADDRCONF(NETDEV_UP): eno8403: link is not ready Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. 
Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8496] manager: (ens2f0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8498] device (ens2f0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: IPv6: ADDRCONF(NETDEV_UP): ens2f0: link is not ready Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8512] manager: (ens2f1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/8) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8514] device (ens2f1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: IPv6: ADDRCONF(NETDEV_UP): ens2f1: link is not ready Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8524] manager: (ovn-k8s-mp0): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/9) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8528] device (eno12399): carrier: link connected Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8533] manager: (eno12399): new Ethernet device (/org/freedesktop/NetworkManager/Devices/10) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8535] manager: (eno12399): assume: will attempt to assume matching connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) (indicated) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8535] device (eno12399): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8538] device (eno12399): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8543] device (eno12399): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8546] device (genev_sys_6081): carrier: link connected Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8547] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8551] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/12) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8553] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8556] manager: (eno12399): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8558] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/14) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8581] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Jan 23 16:12:07 hub-master-0.workload.bos2.lab 
NetworkManager[3328]: [1674490327.8583] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8589] manager: (patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8592] manager: (patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8595] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8598] manager: (ovn-85d815-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/18) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8600] manager: (patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8603] manager: (ovn-k8s-mp0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8606] manager: (patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8609] manager: (ovn-61904a-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8612] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/23) Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00054|bridge|INFO|bridge br-ex: deleted interface br-ex on port 65534 Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device br-ex left promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3352]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3352]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3352]: + '[' -z ']' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3352]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3352]: Not a DHCP4 address. Ignoring. Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3352]: + exit 0 Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3353]: + '[' -z ']' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3353]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3353]: Not a DHCP6 address. Ignoring. 
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3353]: + exit 0 Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8830] platform-linux: do-delete-link[9]: failure 19 (No such device) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8832] device (br-ex): state change: unavailable -> unmanaged (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8835] device (eno12409): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8845] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8846] dhcp4 (eno12399): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8848] device (br-ex): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8928] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8931] policy: auto-activating connection 'br-ex' (eb93fb32-d8f0-4a0b-bcd1-710cd9810b67) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8932] policy: auto-activating connection 'ovs-if-br-ex' (aa79fb48-31f3-48d3-9929-9b8ddeeff9b8) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8934] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8938] device (br-ex): Activation: starting connection 'br-ex' (eb93fb32-d8f0-4a0b-bcd1-710cd9810b67) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8939] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8942] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8944] device (br-ex): Activation: starting connection 'ovs-if-br-ex' (aa79fb48-31f3-48d3-9929-9b8ddeeff9b8) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8945] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8946] manager: NetworkManager state is now CONNECTING Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8947] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8954] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8956] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:12:07 
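[Annotation] The '+'-prefixed nm-dispatcher lines are set -x traces of dispatcher scripts that bail out when no DHCP lease data is present. A minimal reconstruction of that guard; the DHCP4_IP_ADDRESS variable name is an assumption based on the standard dispatcher environment, since the trace only shows an empty-string test:

    #!/bin/bash
    # Dispatcher scripts receive lease data via DHCP4_*/DHCP6_* environment
    # variables; an empty value means there is nothing to do, matching the
    # "+ '[' -z ']'" / "Not a DHCP4 address. Ignoring." trace above.
    set -x
    if [ -z "${DHCP4_IP_ADDRESS}" ]; then
        echo 'Not a DHCP4 address. Ignoring.'
        exit 0
    fi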
hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8959] device (br-ex): Activation: starting connection 'ovs-port-br-ex' (e0341fa2-7cf4-4dd3-98ec-07b79fa5b7ed) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8961] device (eno12399): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8964] device (eno12399): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8967] device (eno12399): Activation: starting connection 'ovs-port-phys0' (22e3cb59-60aa-454b-8810-5b09f69c037d) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8967] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00055|bridge|INFO|bridge br-ex: deleted interface eno12399 on port 1 Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8969] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8970] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device eno12399 left promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8972] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8974] device (br-ex): state change: prepare -> failed (reason 'dependency-failed', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8976] device (br-ex): Activation: failed for connection 'ovs-if-br-ex' Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8989] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8993] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8994] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8997] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8998] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.8998] device (br-ex): Activation: connection 'ovs-port-br-ex' enslaved, continuing activation Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9000] device (eno12399): disconnecting for new activation request. 
Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9000] device (eno12399): state change: ip-config -> deactivating (reason 'new-activation', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9006] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9008] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9009] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9011] device (eno12399): Activation: connection 'ovs-port-phys0' enslaved, continuing activation Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3356]: Error: Device '' not found. Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9012] device (br-ex): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9029] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9032] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9035] device (br-ex): state change: disconnected -> unmanaged (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9038] device (eno12399): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3374]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3374]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3374]: + '[' -z ']' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3374]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3374]: Not a DHCP4 address. Ignoring. Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3374]: + exit 0 Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3375]: + '[' -z ']' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3375]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3375]: Not a DHCP6 address. Ignoring. 
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3375]: + exit 0 Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9182] dhcp4 (eno12399): canceled DHCP transaction Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9188] device (eno12399): Activation: starting connection 'ovs-if-phys0' (c5e9de2a-1ee5-4c3e-801c-4009076b6ab4) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9195] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9197] policy: auto-activating connection 'ovs-if-br-ex' (aa79fb48-31f3-48d3-9929-9b8ddeeff9b8) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9202] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9204] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9207] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9208] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9210] device (br-ex): Activation: starting connection 'ovs-if-br-ex' (aa79fb48-31f3-48d3-9929-9b8ddeeff9b8) Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9211] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9215] device (eno12399): Activation: connection 'ovs-if-phys0' enslaved, continuing activation Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9216] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9218] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9219] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9223] device (br-ex): Activation: connection 'ovs-if-br-ex' enslaved, continuing activation Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9224] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device eno12399 entered promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00056|bridge|INFO|bridge br-ex: added interface eno12399 on port 3 Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00057|netdev|WARN|failed to set MTU for network device br-ex: No such device Jan 23 16:12:07 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00058|bridge|INFO|bridge br-ex: added interface br-ex on port 65534 Jan 23 16:12:07 hub-master-0.workload.bos2.lab 
NetworkManager[3328]: [1674490327.9330] device (br-ex): set-hw-addr: set-cloned MAC address to B4:96:91:C8:A6:30 (B4:96:91:C8:A6:30) Jan 23 16:12:07 hub-master-0.workload.bos2.lab kernel: device br-ex entered promiscuous mode Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9336] dhcp4 (br-ex): activation: beginning transaction (timeout in 45 seconds) Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3388]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd-udevd[3388]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3378]: Error: Device '' not found. Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9363] dhcp4 (br-ex): state changed new lease, address=192.168.18.12 Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9365] policy: set 'ovs-if-br-ex' (br-ex) as default for IPv4 routing and DNS Jan 23 16:12:07 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Activating via systemd: service name='org.freedesktop.resolve1' unit='dbus-org.freedesktop.resolve1.service' requested by ':1.6' (uid=0 pid=3328 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Jan 23 16:12:07 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found. Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3390]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3390]: + INTERFACE_NAME=br-ex Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3390]: + OPERATION=pre-up Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3390]: + '[' pre-up '!=' pre-up ']' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3392]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3393]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}' Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9428] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3390]: + INTERFACE_CONNECTION_UUID= Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3390]: + '[' '' == '' ']' Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3390]: + exit 0 Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9560] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9561] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jan 23 16:12:07 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490327.9563] device (br-ex): Activation: successful, device activated. 
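[Annotation] At this point br-ex holds the DHCP lease for 192.168.18.12 and has been made the IPv4 default for routing and DNS. A quick verification sketch using the standard tooling on the node:

    #!/bin/bash
    # Confirm the address and default route NetworkManager logged for br-ex.
    nmcli -f IP4 device show br-ex
    ip route show default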
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + INTERFACE_NAME=eno12399
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + OPERATION=pre-up
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' pre-up '!=' pre-up ']'
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3406]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3408]: ++ awk -F : '{if($1=="eno12399" && $2!~/^ovs*/) print $NF}'
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state.
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet.
-- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit console-login-helper-messages-issuegen.service has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 13ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources.
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Permit User Sessions...
-- Subject: Unit systemd-user-sessions.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit systemd-user-sessions.service has begun starting up.
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Permit User Sessions.
-- Subject: Unit systemd-user-sessions.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit systemd-user-sessions.service has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + INTERFACE_CONNECTION_UUID=c5e9de2a-1ee5-4c3e-801c-4009076b6ab4
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' c5e9de2a-1ee5-4c3e-801c-4009076b6ab4 == '' ']'
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3425]: ++ awk -F : '{print $NF}'
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3424]: ++ nmcli -t -f connection.slave-type conn show c5e9de2a-1ee5-4c3e-801c-4009076b6ab4
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Started Getty on tty1.
-- Subject: Unit getty@tty1.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit getty@tty1.service has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:07 hub-master-0.workload.bos2.lab systemd[1]: Reached target Login Prompts.
-- Subject: Unit getty.target has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit getty.target has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' ovs-port '!=' ovs-port ']'
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3432]: ++ awk -F : '{print $NF}'
Jan 23 16:12:07 hub-master-0.workload.bos2.lab nm-dispatcher[3431]: ++ nmcli -t -f connection.master conn show c5e9de2a-1ee5-4c3e-801c-4009076b6ab4
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + PORT=22e3cb59-60aa-454b-8810-5b09f69c037d
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' 22e3cb59-60aa-454b-8810-5b09f69c037d == '' ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3437]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3438]: ++ awk -F : '{if( ($1=="22e3cb59-60aa-454b-8810-5b09f69c037d" || $3=="22e3cb59-60aa-454b-8810-5b09f69c037d") && $2~/^ovs*/) print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + PORT_CONNECTION_UUID=22e3cb59-60aa-454b-8810-5b09f69c037d
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' 22e3cb59-60aa-454b-8810-5b09f69c037d == '' ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3444]: ++ nmcli -t -f connection.slave-type conn show 22e3cb59-60aa-454b-8810-5b09f69c037d
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3445]: ++ awk -F : '{print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + PORT_OVS_SLAVE_TYPE=ovs-bridge
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' ovs-bridge '!=' ovs-bridge ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3451]: ++ awk -F : '{print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3450]: ++ nmcli -t -f connection.master conn show 22e3cb59-60aa-454b-8810-5b09f69c037d
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + BRIDGE_NAME=br-ex
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' br-ex '!=' br-ex ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + ovs-vsctl list interface eno12399
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + declare -A INTERFACES
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' -f /run/ofport_requests.br-ex ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + '[' '' ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3456]: ++ get_interface_ofport_request
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3456]: ++ declare -A ofport_requests
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3457]: +++ ovs-vsctl get Interface eno12399 ofport
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3456]: ++ local current_ofport=3
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3456]: ++ '[' '' ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3456]: ++ echo 3
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3456]: ++ return
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + INTERFACES[$INTERFACE_NAME]=3
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + ovs-vsctl set Interface eno12399 ofport_request=3
Jan 23 16:12:08 hub-master-0.workload.bos2.lab ovs-vsctl[3459]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface eno12399 ofport_request=3
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3402]: + declare -p INTERFACES
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.0810] device (eno12399): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.0812] device (eno12399): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.0814] device (eno12399): Activation: successful, device activated.
Jan 23 16:12:08 hub-master-0.workload.bos2.lab chronyd[2922]: Source 2603:c020:6:b900:5e7:2ec:2cdb:c668 offline
Jan 23 16:12:08 hub-master-0.workload.bos2.lab chronyd[2922]: Source 2604:a880:800:a1::ec9:5001 offline
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3472]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3472]: + [[ Wired Connection == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3472]: + echo 'Refusing to modify default connection.'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3472]: Refusing to modify default connection.
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3472]: + exit 0
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3473]: + '[' -z ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3473]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3473]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3473]: + exit 0
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3483]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3483]: + INTERFACE_NAME=br-ex
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3483]: + OPERATION=pre-up
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3483]: + '[' pre-up '!=' pre-up ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3485]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3486]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3483]: + INTERFACE_CONNECTION_UUID=
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3483]: + '[' '' == '' ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3483]: + exit 0
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.1422] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.1424] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.1426] device (br-ex): Activation: successful, device activated.
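The nm-dispatcher trace above (PID 3472) shows the guard several of these hook scripts share: when the event concerns the installer-created default profile, the script logs a refusal and exits without touching it. A minimal sketch of that guard, reconstructed only from the set -x lines above; CONNECTION_ID is the standard dispatcher environment variable assumed to feed the traced comparison:

    #!/bin/bash
    # Guard reconstructed from the trace: bail out rather than modify the
    # default NetworkManager profile. CONNECTION_ID is assumed to hold the
    # dispatcher-provided name of the connection being acted on.
    if [[ $CONNECTION_ID == "Wired Connection" ]]; then
        echo "Refusing to modify default connection."
        exit 0
    fi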
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + INTERFACE_NAME=eno12399
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + OPERATION=pre-up
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' pre-up '!=' pre-up ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3492]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3494]: ++ awk -F : '{if($1=="eno12399" && $2!~/^ovs*/) print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + INTERFACE_CONNECTION_UUID=c5e9de2a-1ee5-4c3e-801c-4009076b6ab4
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' c5e9de2a-1ee5-4c3e-801c-4009076b6ab4 == '' ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3500]: ++ nmcli -t -f connection.slave-type conn show c5e9de2a-1ee5-4c3e-801c-4009076b6ab4
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3501]: ++ awk -F : '{print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' ovs-port '!=' ovs-port ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3506]: ++ nmcli -t -f connection.master conn show c5e9de2a-1ee5-4c3e-801c-4009076b6ab4
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3507]: ++ awk -F : '{print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + PORT=22e3cb59-60aa-454b-8810-5b09f69c037d
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' 22e3cb59-60aa-454b-8810-5b09f69c037d == '' ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3512]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3513]: ++ awk -F : '{if( ($1=="22e3cb59-60aa-454b-8810-5b09f69c037d" || $3=="22e3cb59-60aa-454b-8810-5b09f69c037d") && $2~/^ovs*/) print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + PORT_CONNECTION_UUID=22e3cb59-60aa-454b-8810-5b09f69c037d
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' 22e3cb59-60aa-454b-8810-5b09f69c037d == '' ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3518]: ++ nmcli -t -f connection.slave-type conn show 22e3cb59-60aa-454b-8810-5b09f69c037d
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3519]: ++ awk -F : '{print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + PORT_OVS_SLAVE_TYPE=ovs-bridge
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' ovs-bridge '!=' ovs-bridge ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3524]: ++ nmcli -t -f connection.master conn show 22e3cb59-60aa-454b-8810-5b09f69c037d
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3525]: ++ awk -F : '{print $NF}'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + BRIDGE_NAME=br-ex
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' br-ex '!=' br-ex ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + ovs-vsctl list interface eno12399
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + declare -A INTERFACES
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' -f /run/ofport_requests.br-ex ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents:
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + cat /run/ofport_requests.br-ex
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3530]: declare -A INTERFACES=([eno12399]="3" )
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + source /run/ofport_requests.br-ex
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: ++ INTERFACES=([eno12399]="3")
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: ++ declare -A INTERFACES
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + '[' a ']'
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + ovs-vsctl set Interface eno12399 ofport_request=3
Jan 23 16:12:08 hub-master-0.workload.bos2.lab ovs-vsctl[3531]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface eno12399 ofport_request=3
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3490]: + declare -p INTERFACES
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.2502] device (eno12399): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.2504] device (eno12399): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jan 23 16:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490328.2507] device (eno12399): Activation: successful, device activated.
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3548]: NM resolv-prepender triggered by br-ex dhcp4-change.
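Taken together, the two traces above (PIDs 3402 and 3490) walk through the same ofport-pinning routine twice: the first run finds no /run/ofport_requests.br-ex, adopts the ofport OVS already assigned (3), and persists it with declare -p; the second run sources the saved map and re-applies the same value, so eno12399 keeps a stable OpenFlow port number across NetworkManager events. A condensed bash sketch of that routine, reconstructed purely from the trace (the real dispatcher script also uses a get_interface_ofport_request helper and may differ in detail):

    #!/bin/bash
    # Reconstruction of the ofport_request logic visible in the trace above.
    INTERFACE_NAME=eno12399                      # values taken from the trace
    BRIDGE_NAME=br-ex
    CONFIGURATION_FILE="/run/ofport_requests.${BRIDGE_NAME}"

    declare -A INTERFACES
    if [ -f "$CONFIGURATION_FILE" ]; then
        # Second and later runs: reload the saved interface->ofport map.
        source "$CONFIGURATION_FILE"
    fi
    if [ ! "${INTERFACES[$INTERFACE_NAME]}" ]; then
        # First run: adopt whatever ofport OVS already assigned (3 above).
        INTERFACES[$INTERFACE_NAME]=$(ovs-vsctl get Interface "$INTERFACE_NAME" ofport)
    fi
    # Pin the port number and persist the map for the next invocation.
    ovs-vsctl set Interface "$INTERFACE_NAME" ofport_request="${INTERFACES[$INTERFACE_NAME]}"
    declare -p INTERFACES > "$CONFIGURATION_FILE"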
Jan 23 16:12:08 hub-master-0.workload.bos2.lab nm-dispatcher[3549]: nameserver 192.168.18.9
Jan 23 16:12:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00001|ofproto_dpif_xlate(handler92)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing arp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:80:00:01,dl_dst=ff:ff:ff:ff:ff:ff,arp_spa=10.128.0.1,arp_tpa=10.128.0.58,arp_op=1,arp_sha=0a:58:0a:80:00:01,arp_tha=00:00:00:00:00:00
Jan 23 16:12:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00001|ofproto_dpif_xlate(handler81)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=3,vlan_tci=0x0000,dl_src=0a:58:64:40:00:01,dl_dst=0a:58:64:40:00:02,nw_src=10.130.0.53,nw_dst=192.168.18.14,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=57610,tp_dst=2379,tcp_flags=rst
Jan 23 16:12:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490329.0403] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:12:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490329.0407] policy: set 'Wired Connection' (eno12409) as default for IPv6 routing and DNS
Jan 23 16:12:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490329.2225] dhcp6 (br-ex): activation: beginning transaction (timeout in 45 seconds)
Jan 23 16:12:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490329.2227] policy: set 'ovs-if-br-ex' (br-ex) as default for IPv6 routing and DNS
Jan 23 16:12:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490329.2235] dhcp6 (br-ex): state changed new lease, address=2600:52:7:18::12
Jan 23 16:12:09 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:09 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount completed and consumed the indicated resources.
Jan 23 16:12:09 hub-master-0.workload.bos2.lab systemd[1]: crio-wipe.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-wipe.service has successfully entered the 'dead' state.
Jan 23 16:12:09 hub-master-0.workload.bos2.lab systemd[1]: Started CRI-O Auto Update Script.
-- Subject: Unit crio-wipe.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-wipe.service has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:09 hub-master-0.workload.bos2.lab systemd[1]: crio-wipe.service: Consumed 1.318s CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-wipe.service completed and consumed the indicated resources.
Jan 23 16:12:09 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:09 hub-master-0.workload.bos2.lab nm-dispatcher[3571]: NM resolv-prepender: Starting download of baremetal runtime cfg image
Jan 23 16:12:09 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Trying to pull quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67...
Jan 23 16:12:09 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:09 hub-master-0.workload.bos2.lab nm-dispatcher[3548]: NM resolv-prepender: Waiting for baremetal runtime cfg image to be available
Jan 23 16:12:10 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Getting image source signatures
Jan 23 16:12:10 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Copying blob sha256:689ae2a15bc971f1f75a2fbb0ca160ef6ef887e5f55a059dc4d606c69750d5a3
Jan 23 16:12:10 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Copying blob sha256:d8190195889efb5333eeec18af9b6c82313edd4db62989bd3a357caca4f13f0e
Jan 23 16:12:10 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Copying blob sha256:97da74cc6d8fa5d1634eb1760fd1da5c6048619c264c23e62d75f3bf6b8ef5c4
Jan 23 16:12:10 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Copying blob sha256:f0f4937bc70fa7bf9afc1eb58400dbc646c9fd0c9f95cfdbfcdedd55f6fa0bcd
Jan 23 16:12:10 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Copying blob sha256:833de2b0ccff7a77c31b4d2e3f96077b638aada72bfde75b5eddd5903dc11bb7
Jan 23 16:12:10 hub-master-0.workload.bos2.lab ovs-vsctl[3609]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=hub-master-0.workload.bos2.lab
Jan 23 16:12:10 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00001|ofproto_dpif_xlate(handler28)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:80:00:01,dl_dst=0a:58:0a:80:00:04,nw_src=10.129.0.39,nw_dst=10.128.0.4,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=43444,tp_dst=8080,tcp_flags=syn
Jan 23 16:12:10 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:10 hub-master-0.workload.bos2.lab nm-dispatcher[3548]: NM resolv-prepender: Waiting for baremetal runtime cfg image to be available
Jan 23 16:12:11 hub-master-0.workload.bos2.lab chronyd[2922]: Selected source 192.168.18.9
Jan 23 16:12:13 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:13 hub-master-0.workload.bos2.lab nm-dispatcher[3548]: NM resolv-prepender: Waiting for baremetal runtime cfg image to be available
Jan 23 16:12:15 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Copying config sha256:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934
Jan 23 16:12:15 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Writing manifest to image destination
Jan 23 16:12:15 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: Storing signatures
Jan 23 16:12:15 hub-master-0.workload.bos2.lab nm-dispatcher[3573]: ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934
Jan 23 16:12:15 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:15 hub-master-0.workload.bos2.lab nm-dispatcher[3571]: NM resolv-prepender: Download of baremetal runtime cfg image completed
Jan 23 16:12:15 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:15 hub-master-0.workload.bos2.lab nm-dispatcher[3548]: NM resolv-prepender: Waiting for baremetal runtime cfg image to be available
Jan 23 16:12:16 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:16 hub-master-0.workload.bos2.lab systemd[1]: Created slice machine.slice.
-- Subject: Unit machine.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit machine.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:16 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope.
-- Subject: Unit libpod-conmon-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-conmon-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:16 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.
-- Subject: Unit libpod-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope has finished starting up.
--
-- The start-up result is done.
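The interleaving above is one process (PID 3573) pulling the runtime-cfg image while the resolv-prepender dispatcher (PID 3548) polls until the image is available locally. A hypothetical sketch of that wait, assuming podman is the puller; the exact polling command used by the real script is not shown in the log:

    #!/bin/bash
    # Illustrative wait loop matching the "Waiting for baremetal runtime cfg
    # image to be available" messages above; the real script may poll differently.
    IMAGE="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67"

    podman pull "$IMAGE" &          # produces the "Trying to pull ..." lines
    until podman image exists "$IMAGE"; do
        echo "NM resolv-prepender: Waiting for baremetal runtime cfg image to be available"
        sleep 1
    done
    echo "NM resolv-prepender: Download of baremetal runtime cfg image completed"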
Jan 23 16:12:16 hub-master-0.workload.bos2.lab kernel: cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
Jan 23 16:12:16 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:12:16 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:12:16 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:12:16 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:12:16 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:12:16 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:12:16 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="retrieved Address map map[0xc00018cea0:[127.0.0.1/8 lo ::1/128] 0xc00018db00:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:16Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:17Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 13:[{Ifindex: 13 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:17Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:17Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3782]: time="2023-01-23T16:12:17Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: libpod-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope has successfully entered the 'dead' state.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: libpod-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope: Consumed 51ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope completed and consumed the indicated resources.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope has successfully entered the 'dead' state.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope: Consumed 118ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58.scope completed and consumed the indicated resources.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3872]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3548]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:12:17 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490337.3247] audit: op="reload" arg="2" pid=3882 uid=0 result="success"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490337.3247] config: signal: DNS_RC
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3887]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3887]: + [[ ovs-if-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3887]: + '[' -z ']'
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3887]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3887]: Not a DHCP4 address. Ignoring.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3887]: + exit 0
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3888]: + '[' -z ']'
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3888]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3888]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3888]: + exit 0
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3895]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3895]: + INTERFACE_NAME=br-ex
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3895]: + OPERATION=pre-up
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3895]: + '[' pre-up '!=' pre-up ']'
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3897]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3898]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}'
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3895]: + INTERFACE_CONNECTION_UUID=
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3895]: + '[' '' == '' ']'
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3895]: + exit 0
Jan 23 16:12:17 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490337.3707] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jan 23 16:12:17 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490337.3709] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jan 23 16:12:17 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490337.3710] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 16:12:17 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490337.3711] device (br-ex): Activation: successful, device activated.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490337.3714] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 16:12:17 hub-master-0.workload.bos2.lab chronyd[2922]: Source 2603:c020:6:b900:5e7:2ec:2cdb:c668 online
Jan 23 16:12:17 hub-master-0.workload.bos2.lab chronyd[2922]: Source 2604:a880:800:a1::ec9:5001 online
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3922]: NM resolv-prepender triggered by br-ex up.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3923]: nameserver 2600:52:7:18::9
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3923]: nameserver 192.168.18.9
Jan 23 16:12:17 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00059|memory|INFO|461728 kB peak resident set size after 10.1 seconds
Jan 23 16:12:17 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00060|memory|INFO|handlers:112 idl-cells:530 ports:8 revalidators:29 rules:9 udpif keys:63
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay\x2dcontainers-fa70e45bc88036b6e659b0c8263800187826f18389498f08c57b31a799f84e58-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope.
-- Subject: Unit libpod-conmon-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-conmon-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.
-- Subject: Unit libpod-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="retrieved Address map map[0xc000337440:[127.0.0.1/8 lo ::1/128] 0xc00036d0e0:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 13:[{Ifindex: 13 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab nm-dispatcher[3960]: time="2023-01-23T16:12:17Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: libpod-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope has successfully entered the 'dead' state.
Jan 23 16:12:17 hub-master-0.workload.bos2.lab systemd[1]: libpod-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope: Consumed 51ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope completed and consumed the indicated resources.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope has successfully entered the 'dead' state.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope: Consumed 101ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8.scope completed and consumed the indicated resources.
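Each runtimecfg pass above ends the same way: the only non-loopback entry in the address map whose subnet contains the API VIP is 192.168.18.12/25 on br-ex, so [192.168.18.12 2600:52:7:18::12] are chosen as the node IPs. The containment test itself runs in Go inside the runtime-cfg container; an equivalent IPv4 check in bash, purely for illustration:

    #!/bin/bash
    # Does ADDR/PREFIX contain VIP? Mirrors the "Checking whether address
    # 192.168.18.12/25 br-ex contains VIP 192.168.18.7" decision logged above.
    ip2int() {
        local IFS=. a b c d
        read -r a b c d <<< "$1"
        echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
    }
    contains() {   # usage: contains 192.168.18.12/25 192.168.18.7
        local addr=${1%/*} prefix=${1#*/} vip=$2
        local mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
        (( ( $(ip2int "$addr") & mask ) == ( $(ip2int "$vip") & mask ) ))
    }
    contains 192.168.18.12/25 192.168.18.7 && echo "Address contains VIP"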
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4040]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[3922]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:12:18 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490338.2382] audit: op="reload" arg="2" pid=4050 uid=0 result="success"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490338.2383] config: signal: DNS_RC
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4055]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4055]: + [[ ovs-port-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4055]: + '[' -z ']'
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4055]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4055]: Not a DHCP4 address. Ignoring.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4055]: + exit 0
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4056]: + '[' -z ']'
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4056]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4056]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4056]: + exit 0
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4073]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4073]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4073]: + '[' -z ']'
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4073]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4073]: Not a DHCP4 address. Ignoring.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4073]: + exit 0
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4074]: + '[' -z ']'
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4074]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4074]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4074]: + exit 0
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4077]: Error: Device '' not found.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4098]: NM resolv-prepender triggered by eno12399 up.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4099]: nameserver 2600:52:7:18::9
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4099]: nameserver 192.168.18.9
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope.
-- Subject: Unit libpod-conmon-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-conmon-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope has finished starting up.
--
-- The start-up result is done.
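The prepend step logged repeatedly above rewrites /etc/resolv.conf so the chosen node IP resolves first, with NetworkManager's own resolv.conf supplying the remaining nameservers (the "nameserver 2600:52:7:18::9" / "nameserver 192.168.18.9" lines echoed by the dispatcher). A minimal sketch of that step, under the assumption that it is a simple concatenate-and-replace; the real resolv-prepender logic ships with the machine-config operator and may differ:

    #!/bin/bash
    # Put the node IP first, then whatever NetworkManager currently knows.
    NODEIP=192.168.18.12    # node IP chosen by the runtimecfg container above
    {
        echo "nameserver ${NODEIP}"
        cat /var/run/NetworkManager/resolv.conf
    } > /etc/resolv.conf.tmp && mv /etc/resolv.conf.tmp /etc/resolv.conf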
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-67512e59da67fceab29a75ee458a848eebb9dd63deca66f654761e9df9a2f1ea-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-67512e59da67fceab29a75ee458a848eebb9dd63deca66f654761e9df9a2f1ea-merged.mount has successfully entered the 'dead' state.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay\x2dcontainers-b0d2a43ddd606df6ec26834c94661f404b3afe2bf67c869d8b9deaa90b510ef8-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.
-- Subject: Unit libpod-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=debug msg="retrieved Address map map[0xc0003399e0:[127.0.0.1/8 lo ::1/128] 0xc0002f0120:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab nm-dispatcher[4133]: time="2023-01-23T16:12:18Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: libpod-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope has successfully entered the 'dead' state.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: libpod-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope: Consumed 55ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope completed and consumed the indicated resources.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay\x2dcontainers-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-cdc1be7de35664661084b20d8d1b1c5d6d6317da17a7efad587cc5c4ea61ce8b-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-cdc1be7de35664661084b20d8d1b1c5d6d6317da17a7efad587cc5c4ea61ce8b-merged.mount has successfully entered the 'dead' state.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope has successfully entered the 'dead' state.
Jan 23 16:12:18 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope: Consumed 96ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-64857b4f1f9a4a6fdc05fda1f539f0b9b52bd525d9e8741f44f956d7eba818dc.scope completed and consumed the indicated resources.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4213]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4098]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:12:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490339.1590] audit: op="reload" arg="2" pid=4223 uid=0 result="success"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490339.1590] config: signal: DNS_RC
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4228]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4228]: + [[ ovs-port-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4228]: + '[' -z ']'
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4228]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4228]: Not a DHCP4 address. Ignoring.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4228]: + exit 0
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4229]: + '[' -z ']'
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4229]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4229]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4229]: + exit 0
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet...
-- Subject: Unit console-login-helper-messages-issuegen.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit console-login-helper-messages-issuegen.service has begun starting up.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4261]: NM resolv-prepender triggered by br-ex up.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4262]: nameserver 2600:52:7:18::9
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4262]: nameserver 192.168.18.9
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope.
-- Subject: Unit libpod-conmon-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-conmon-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.
-- Subject: Unit libpod-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="retrieved Address map map[0xc0001a70e0:[127.0.0.1/8 lo ::1/128] 0xc0001a7d40:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 13:[{Ifindex: 13 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab nm-dispatcher[4297]: time="2023-01-23T16:12:19Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: libpod-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope has successfully entered the 'dead' state.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: libpod-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope: Consumed 52ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope completed and consumed the indicated resources.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay\x2dcontainers-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-1e7f2fdc5d1b3c6c4f8c056c8c8edcac4da6a0f98cd22d336f43fdcb6eb8c576-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-1e7f2fdc5d1b3c6c4f8c056c8c8edcac4da6a0f98cd22d336f43fdcb6eb8c576-merged.mount has successfully entered the 'dead' state.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope has successfully entered the 'dead' state.
Jan 23 16:12:19 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope: Consumed 110ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-c805c37ccdcba5c3b69dca5b4d6c5a37196b0194d65311d35bf871386301eb0d.scope completed and consumed the indicated resources.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4383]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4261]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:12:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490340.0525] audit: op="reload" arg="2" pid=4393 uid=0 result="success"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490340.0526] config: signal: DNS_RC
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4398]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4398]: + [[ br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4398]: + '[' -z ']'
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4398]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4398]: Not a DHCP4 address. Ignoring.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4398]: + exit 0
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4399]: + '[' -z ']'
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4399]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4399]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4399]: + exit 0
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4425]: NM resolv-prepender triggered by eno12399 up.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4426]: nameserver 2600:52:7:18::9
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4426]: nameserver 192.168.18.9
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet.
-- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit console-login-helper-messages-issuegen.service has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 13ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-2341b7c3b4df6cb02feddaa2d42eeb01e0346ceae89ed76f22521bae97a21c09.scope.
-- Subject: Unit libpod-conmon-2341b7c3b4df6cb02feddaa2d42eeb01e0346ceae89ed76f22521bae97a21c09.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-conmon-2341b7c3b4df6cb02feddaa2d42eeb01e0346ceae89ed76f22521bae97a21c09.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 2341b7c3b4df6cb02feddaa2d42eeb01e0346ceae89ed76f22521bae97a21c09.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="retrieved Address map map[0xc0002f2120:[127.0.0.1/8 lo ::1/128] 0xc0002f2d80:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 13:[{Ifindex: 13 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4473]: time="2023-01-23T16:12:20Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-2341b7c3b4df6cb02feddaa2d42eeb01e0346ceae89ed76f22521bae97a21c09.scope: Succeeded.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-2341b7c3b4df6cb02feddaa2d42eeb01e0346ceae89ed76f22521bae97a21c09.scope: Consumed 50ms CPU time
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-2341b7c3b4df6cb02feddaa2d42eeb01e0346ceae89ed76f22521bae97a21c09.scope: Succeeded.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-2341b7c3b4df6cb02feddaa2d42eeb01e0346ceae89ed76f22521bae97a21c09.scope: Consumed 102ms CPU time
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
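The "contains VIP" checks above are a plain subnet-membership test: VIP 192.168.18.7 falls inside 192.168.18.12/25, so br-ex's address wins and the loopback candidates are rejected. The same test in pure bash arithmetic (a hypothetical helper for illustration; the real check lives in the Go-based runtimecfg binary that emits these log lines):

# Convert a dotted-quad IPv4 to an integer, then compare network prefixes.
ip4_to_int() { local IFS=.; read -r a b c d <<<"$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }
cidr_contains() {  # usage: cidr_contains 192.168.18.12/25 192.168.18.7
  local ip=${1%/*} len=${1#*/} vip=$2
  local mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  [ $(( $(ip4_to_int "$ip") & mask )) -eq $(( $(ip4_to_int "$vip") & mask )) ]
}
cidr_contains 192.168.18.12/25 192.168.18.7 && echo "contains VIP"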
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4553]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4425]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:12:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490340.9436] audit: op="reload" arg="2" pid=4563 uid=0 result="success"
Jan 23 16:12:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490340.9437] config: signal: DNS_RC
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4568]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4568]: + [[ ovs-if-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4568]: + '[' -z ']'
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4568]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4568]: Not a DHCP4 address. Ignoring.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4568]: + exit 0
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4569]: + '[' -z ']'
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4569]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4569]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4569]: + exit 0
Jan 23 16:12:20 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet...
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4591]: NM resolv-prepender triggered by br-ex dhcp6-change.
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4592]: nameserver 2600:52:7:18::9
Jan 23 16:12:20 hub-master-0.workload.bos2.lab nm-dispatcher[4592]: nameserver 192.168.18.9
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-cc86b9f4a300b009064a25b55bddc5e87fb2d819b2d65736b7a2b8f84fc002f4.scope.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container cc86b9f4a300b009064a25b55bddc5e87fb2d819b2d65736b7a2b8f84fc002f4.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="retrieved Address map map[0xc0001c7680:[127.0.0.1/8 lo ::1/128] 0xc0003ae360:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 13:[{Ifindex: 13 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4630]: time="2023-01-23T16:12:21Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: libpod-cc86b9f4a300b009064a25b55bddc5e87fb2d819b2d65736b7a2b8f84fc002f4.scope: Succeeded.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: libpod-cc86b9f4a300b009064a25b55bddc5e87fb2d819b2d65736b7a2b8f84fc002f4.scope: Consumed 51ms CPU time
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-cc86b9f4a300b009064a25b55bddc5e87fb2d819b2d65736b7a2b8f84fc002f4.scope: Succeeded.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-cc86b9f4a300b009064a25b55bddc5e87fb2d819b2d65736b7a2b8f84fc002f4.scope: Consumed 116ms CPU time
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
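The backslash-heavy comparisons in the dispatcher traces (e.g. \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n) are just how bash xtrace renders a quoted right-hand side; the script compares the connection name against the literal "Wired Connection" and then exits early when no DHCP4 address is set. A reconstruction of that gate (an assumption: CONNECTION_ID and DHCP4_IP_ADDRESS are standard nm-dispatcher environment variables, but the surrounding logic is inferred from the trace, not taken from the shipped script):

set -x
network_type=OVNKubernetes                     # templated into the script, hence the expanded literal in the trace
[[ "$network_type" == "OVNKubernetes" ]] || exit 0
[[ "$CONNECTION_ID" == "Wired Connection" ]]   # profile-name check seen in the trace
if [ -z "$DHCP4_IP_ADDRESS" ]; then
  echo 'Not a DHCP4 address. Ignoring.'
  exit 0
fi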
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4718]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4591]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:12:21 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490341.8242] audit: op="reload" arg="2" pid=4728 uid=0 result="success"
Jan 23 16:12:21 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490341.8242] config: signal: DNS_RC
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4733]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4733]: + [[ ovs-if-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4733]: + '[' -z ']'
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4733]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4733]: Not a DHCP4 address. Ignoring.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4733]: + exit 0
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4734]: + '[' -z ']'
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4734]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4734]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4734]: + exit 0
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4784]: NM resolv-prepender triggered by br-ex up.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4789]: nameserver 2600:52:7:18::9
Jan 23 16:12:21 hub-master-0.workload.bos2.lab nm-dispatcher[4789]: nameserver 192.168.18.9
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet.
Jan 23 16:12:21 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 12ms CPU time
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
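The "triggered by eno12399 up", "br-ex dhcp6-change", and "br-ex up" lines show the same handler firing for different interface/action pairs. NetworkManager invokes dispatcher scripts with the device as $1 and the action as $2; a sketch of that entry point (hypothetical shape, not the shipped script):

IFACE=$1 ACTION=$2
case "$ACTION" in
  up|dhcp4-change|dhcp6-change)
    echo "NM resolv-prepender triggered by $IFACE $ACTION."
    ;;
  *)
    exit 0
    ;;
esac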
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-9fe313f1b94dadb6245f3e69409746d0ac22de32d3bc6f76d1819e532060a7c1.scope.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 9fe313f1b94dadb6245f3e69409746d0ac22de32d3bc6f76d1819e532060a7c1.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=debug msg="retrieved Address map map[0xc000376ea0:[127.0.0.1/8 lo ::1/128] 0xc000320240:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4837]: time="2023-01-23T16:12:22Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: libpod-9fe313f1b94dadb6245f3e69409746d0ac22de32d3bc6f76d1819e532060a7c1.scope: Succeeded.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: libpod-9fe313f1b94dadb6245f3e69409746d0ac22de32d3bc6f76d1819e532060a7c1.scope: Consumed 52ms CPU time
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-9fe313f1b94dadb6245f3e69409746d0ac22de32d3bc6f76d1819e532060a7c1.scope: Succeeded.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-9fe313f1b94dadb6245f3e69409746d0ac22de32d3bc6f76d1819e532060a7c1.scope: Consumed 118ms CPU time
Jan 23 16:12:22 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4922]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4784]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:12:22 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490342.7270] audit: op="reload" arg="2" pid=4932 uid=0 result="success"
Jan 23 16:12:22 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490342.7271] config: signal: DNS_RC
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4937]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4937]: + [[ ovs-if-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4937]: + '[' -z 192.168.18.12 ']'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4937]: + '[' 86400 -lt 4294967295 ']'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4937]: + echo 'Not an infinite DHCP4 lease. Ignoring.'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4937]: Not an infinite DHCP4 lease. Ignoring.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4937]: + exit 0
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4938]: + '[' -z 2600:52:7:18::12 ']'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4940]: ++ ip -j -6 a show br-ex
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4941]: ++ jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .preferred_life_time'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4938]: + LEASE_TIME=43187
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4943]: ++ ip -j -6 a show br-ex
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4944]: ++ jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .prefixlen'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4938]: + PREFIX_LEN=128
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4938]: + '[' 43187 -lt 4294967295 ']'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4938]: + echo 'Not an infinite DHCP6 lease. Ignoring.'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4938]: Not an infinite DHCP6 lease. Ignoring.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4938]: + exit 0
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4961]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4961]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4961]: + '[' -z ']'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4961]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4961]: Not a DHCP4 address. Ignoring.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4961]: + exit 0
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4962]: + '[' -z ']'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4962]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4962]: Not a DHCP6 address. Ignoring.
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4962]: + exit 0
Jan 23 16:12:22 hub-master-0.workload.bos2.lab nm-dispatcher[4965]: Error: Device '' not found.
Jan 23 16:12:32 hub-master-0.workload.bos2.lab systemd[1]: NetworkManager-dispatcher.service: Succeeded.
Jan 23 16:12:32 hub-master-0.workload.bos2.lab systemd[1]: NetworkManager-dispatcher.service: Consumed 17.922s CPU time
Jan 23 16:12:37 hub-master-0.workload.bos2.lab systemd[1]: systemd-hostnamed.service: Succeeded.
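The pid 4938-4944 trace above shows exactly how the DHCPv6 lease is probed: ip -j emits JSON and jq pulls the preferred lifetime and prefix length of the specific global address; any lifetime below 4294967295 (the DHCP "infinite" value) means a finite lease, so the handler exits. Reassembled from the trace:

LEASE_TIME=$(ip -j -6 a show br-ex | jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .preferred_life_time')
PREFIX_LEN=$(ip -j -6 a show br-ex | jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .prefixlen')
if [ "$LEASE_TIME" -lt 4294967295 ]; then
  echo 'Not an infinite DHCP6 lease. Ignoring.'
  exit 0
fi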
Jan 23 16:12:37 hub-master-0.workload.bos2.lab systemd[1]: systemd-hostnamed.service: Consumed 50ms CPU time
Jan 23 16:13:01 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00001|ofproto_dpif_xlate(handler12)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing arp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:80:00:01,dl_dst=ff:ff:ff:ff:ff:ff,arp_spa=10.128.0.1,arp_tpa=10.128.0.58,arp_op=1,arp_sha=0a:58:0a:80:00:01,arp_tha=00:00:00:00:00:00
Jan 23 16:13:03 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00001|ofproto_dpif_xlate(handler98)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:80:00:01,dl_dst=0a:58:0a:80:00:04,nw_src=10.129.0.39,nw_dst=10.128.0.4,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=49854,tp_dst=8080,tcp_flags=syn
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: NetworkManager-wait-online.service: Failed with result 'exit-code'.
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: Failed to start Network Manager Wait Online.
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: NetworkManager-wait-online.service: Consumed 74ms CPU time
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: Starting Writes IP address configuration so that kubelet and crio services select a valid node IP...
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 07ff1c29a89c4a564a1b3b9f259a4cec11ace333d9f71e4f0853ea1ac6ae5b58.
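NetworkManager-wait-online exits nonzero when some managed device has not finished activating within its timeout; here eno12409 was still "connecting (getting IP configuration)" in the device state dump further below, which is consistent with the failure. Standard commands for confirming that from a shell (routine diagnostics, not part of this log):

systemctl status NetworkManager-wait-online.service
nm-online -s -q -t 60; echo "nm-online exit: $?"
nmcli -g DEVICE,STATE device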
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="retrieved Address map map[0xc00018b7a0:[127.0.0.1/8 lo ::1/128] 0xc0003ac480:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 13 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 13:[{Ifindex: 13 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Writing Kubelet service override with content [Service]\nEnvironment=\"KUBELET_NODE_IP=192.168.18.12\" \"KUBELET_NODE_IPS=192.168.18.12,2600:52:7:18::12\"\n"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Opening path /etc/systemd/system/kubelet.service.d/20-nodenet.conf"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Writing path /etc/systemd/system/kubelet.service.d/20-nodenet.conf with content [Service]\nEnvironment=\"KUBELET_NODE_IP=192.168.18.12\" \"KUBELET_NODE_IPS=192.168.18.12,2600:52:7:18::12\"\n"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Writing CRIO service override with content [Service]\nEnvironment=\"CONTAINER_STREAM_ADDRESS=192.168.18.12\"\n"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Opening path /etc/systemd/system/crio.service.d/20-nodenet.conf"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Writing path /etc/systemd/system/crio.service.d/20-nodenet.conf with content [Service]\nEnvironment=\"CONTAINER_STREAM_ADDRESS=192.168.18.12\"\n"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Opening path /run/nodeip-configuration/primary-ip"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Writing path /run/nodeip-configuration/primary-ip with content 192.168.18.12"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Opening path /run/nodeip-configuration/ipv4"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Writing path /run/nodeip-configuration/ipv4 with content 192.168.18.12"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Opening path /run/nodeip-configuration/ipv6"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab bash[4991]: time="2023-01-23T16:13:07Z" level=info msg="Writing path /run/nodeip-configuration/ipv6 with content 2600:52:7:18::12"
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: libpod-07ff1c29a89c4a564a1b3b9f259a4cec11ace333d9f71e4f0853ea1ac6ae5b58.scope: Succeeded.
Jan 23 16:13:07 hub-master-0.workload.bos2.lab systemd[1]: libpod-07ff1c29a89c4a564a1b3b9f259a4cec11ace333d9f71e4f0853ea1ac6ae5b58.scope: Consumed 52ms CPU time
Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07ff1c29a89c4a564a1b3b9f259a4cec11ace333d9f71e4f0853ea1ac6ae5b58-userdata-shm.mount: Succeeded.
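The drop-in files that nodeip-configuration reports writing above, reproduced as a sketch (paths and contents are taken verbatim from the log messages):

mkdir -p /etc/systemd/system/kubelet.service.d /etc/systemd/system/crio.service.d
cat > /etc/systemd/system/kubelet.service.d/20-nodenet.conf <<'EOF'
[Service]
Environment="KUBELET_NODE_IP=192.168.18.12" "KUBELET_NODE_IPS=192.168.18.12,2600:52:7:18::12"
EOF
cat > /etc/systemd/system/crio.service.d/20-nodenet.conf <<'EOF'
[Service]
Environment="CONTAINER_STREAM_ADDRESS=192.168.18.12"
EOF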
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-07ff1c29a89c4a564a1b3b9f259a4cec11ace333d9f71e4f0853ea1ac6ae5b58-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: Reloading. Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: nodeip-configuration.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit nodeip-configuration.service has successfully entered the 'dead' state. Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: Started Writes IP address configuration so that kubelet and crio services select a valid node IP. -- Subject: Unit nodeip-configuration.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit nodeip-configuration.service has finished starting up. -- -- The start-up result is done. Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: nodeip-configuration.service: Consumed 225ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit nodeip-configuration.service completed and consumed the indicated resources. Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: Starting Configures OVS with proper host networking configuration... -- Subject: Unit ovs-configuration.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ovs-configuration.service has begun starting up. Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + touch /var/run/ovs-config-executed Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + NM_CONN_PATH=/etc/NetworkManager/system-connections Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nm_config_changed=0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + MANAGED_NM_CONN_SUFFIX=-slave-ovs-clone Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + BRIDGE_METRIC=48 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + BRIDGE1_METRIC=49 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + trap handle_exit EXIT Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '!' 
-f /etc/cno/mtu-migration/config ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Cleaning up left over mtu migration configuration' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Cleaning up left over mtu migration configuration Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rm -rf /etc/cno/mtu-migration Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5128]: + rpm -qa Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5129]: + grep -q openvswitch Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + print_state Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Current device, connection, interface and routing state:' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Current device, connection, interface and routing state: Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5131]: + nmcli -g all device Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: + grep -v unmanaged Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: br-ex:ovs-interface:connected:full:full:/org/freedesktop/NetworkManager/Devices/2:ovs-if-br-ex:aa79fb48-31f3-48d3-9929-9b8ddeeff9b8:/org/freedesktop/NetworkManager/ActiveConnection/8 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: eno12399:ethernet:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/10:ovs-if-phys0:c5e9de2a-1ee5-4c3e-801c-4009076b6ab4:/org/freedesktop/NetworkManager/ActiveConnection/7 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: br-ex:ovs-bridge:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/12:br-ex:eb93fb32-d8f0-4a0b-bcd1-710cd9810b67:/org/freedesktop/NetworkManager/ActiveConnection/3 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: br-ex:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/14:ovs-port-br-ex:e0341fa2-7cf4-4dd3-98ec-07b79fa5b7ed:/org/freedesktop/NetworkManager/ActiveConnection/5 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: eno12399:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/13:ovs-port-phys0:22e3cb59-60aa-454b-8810-5b09f69c037d:/org/freedesktop/NetworkManager/ActiveConnection/6 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: eno12409:ethernet:connecting (getting IP configuration):none:none:/org/freedesktop/NetworkManager/Devices/4:Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:/org/freedesktop/NetworkManager/ActiveConnection/2 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: eno8303:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/5::: Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: eno8403:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/6::: Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: ens2f0:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/7::: Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5132]: ens2f1:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/8::: Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli -g all connection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5136]: Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:802-3-ethernet:1674437287:Mon Jan 23 01\:28\:07 
2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/2:yes:eno12409:activating:/org/freedesktop/NetworkManager/ActiveConnection/2::/etc/NetworkManager/system-connections/default_connection.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5136]: ovs-if-br-ex:aa79fb48-31f3-48d3-9929-9b8ddeeff9b8:ovs-interface:1674490337:Mon Jan 23 16\:12\:17 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/7:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/8:ovs-port:/etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5136]: br-ex:eb93fb32-d8f0-4a0b-bcd1-710cd9810b67:ovs-bridge:1674490328:Mon Jan 23 16\:12\:08 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/3:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/3::/etc/NetworkManager/system-connections/br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5136]: ovs-if-phys0:c5e9de2a-1ee5-4c3e-801c-4009076b6ab4:802-3-ethernet:1674490328:Mon Jan 23 16\:12\:08 2023:yes:100:no:/org/freedesktop/NetworkManager/Settings/6:yes:eno12399:activated:/org/freedesktop/NetworkManager/ActiveConnection/7:ovs-port:/etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5136]: ovs-port-br-ex:e0341fa2-7cf4-4dd3-98ec-07b79fa5b7ed:ovs-port:1674490327:Mon Jan 23 16\:12\:07 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/5:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/5:ovs-bridge:/etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5136]: ovs-port-phys0:22e3cb59-60aa-454b-8810-5b09f69c037d:ovs-port:1674490328:Mon Jan 23 16\:12\:08 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/4:yes:eno12399:activated:/org/freedesktop/NetworkManager/ActiveConnection/6:ovs-bridge:/etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5136]: Wired Connection:8105e4a7-d75c-4c11-b250-7d472ed203fe:802-3-ethernet:0:never:yes:0:no:/org/freedesktop/NetworkManager/Settings/1:no:::::/run/NetworkManager/system-connections/default_connection.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip -d address show Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: inet 127.0.0.1/8 scope host lo Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: valid_lft forever preferred_lft forever Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: inet6 ::1/128 scope host Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: valid_lft forever preferred_lft forever Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 2: eno8303: mtu 1500 qdisc mq state DOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether b0:7b:25:de:1a:bc brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 60 maxmtu 9000 numtxqueues 5 numrxqueues 5 gso_max_size 65536 gso_max_segs 65535 Jan 
23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 3: eno12399: mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether b4:96:91:c8:a6:30 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9702 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: openvswitch_slave numtxqueues 112 numrxqueues 112 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 4: ens2f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether 04:3f:72:fe:d9:b8 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 768 numrxqueues 126 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 5: eno8403: mtu 1500 qdisc mq state DOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether b0:7b:25:de:1a:bd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 60 maxmtu 9000 numtxqueues 5 numrxqueues 5 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 6: ens2f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether 04:3f:72:fe:d9:b9 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 768 numrxqueues 126 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 7: eno12409: mtu 1500 qdisc mq state UP group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether b4:96:91:c8:a6:31 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9702 numtxqueues 112 numrxqueues 112 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: inet6 fe80::b696:91ff:fec8:a631/64 scope link noprefixroute Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: valid_lft forever preferred_lft forever Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 8: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether e6:05:44:0a:7c:b5 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 10: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether c6:76:de:0c:d9:da brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: inet6 fe80::c476:deff:fe0c:d9da/64 scope link Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: valid_lft forever preferred_lft forever Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 11: ovn-k8s-mp0: mtu 1400 qdisc 
noop state DOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether 12:16:15:ff:96:b9 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 12: br-int: mtu 1400 qdisc noop state DOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether ba:22:7f:9b:cf:d8 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: 13: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: link/ether b4:96:91:c8:a6:30 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: inet 192.168.18.12/25 brd 192.168.18.127 scope global dynamic noprefixroute br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: valid_lft 86339sec preferred_lft 86339sec Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: inet6 2600:52:7:18::12/128 scope global dynamic noprefixroute Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: valid_lft 43141sec preferred_lft 43141sec Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: inet6 fe80::b696:91ff:fec8:a630/64 scope link noprefixroute Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5140]: valid_lft forever preferred_lft forever Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip route show Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vsctl[5166]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.5292] device (eno12399): state change: activated -> deactivating (reason 'removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5141]: default via 192.168.18.1 dev br-ex proto dhcp src 192.168.18.12 metric 48 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5141]: 192.168.18.0/25 dev br-ex proto kernel scope link src 192.168.18.12 metric 48 Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00061|bridge|INFO|bridge br-ex: deleted interface br-ex on port 65534 Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.5295] device (eno12399): releasing ovs interface eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip -6 route show Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00062|bridge|INFO|bridge br-ex: deleted interface eno12399 on port 3 Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.5296] ovsdb: Unknown interface 'ea8af0ab-4923-4118-a23e-a9a43f2a744a' in port 'e6b05a3c-8648-40a3-9c7c-d9638a93274c' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: ::1 dev lo proto kernel metric 256 pref medium Jan 23 
16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: 2600:52:7:18::12 dev br-ex proto kernel metric 48 pref medium Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: 2600:52:7:18::/64 dev br-ex proto ra metric 48 pref medium Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: 2600:52:7:18::/64 dev eno12409 proto ra metric 101 pref medium Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: fe80::/64 dev eno12409 proto kernel metric 1024 pref medium Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: fe80::/64 dev br-ex proto kernel metric 1024 pref medium Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: default via fe80::1532:4e62:7604:4733 dev br-ex proto ra metric 48 pref medium Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5142]: default via fe80::1532:4e62:7604:4733 dev eno12409 proto ra metric 101 pref medium Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00063|bridge|INFO|bridge br-ex: deleted interface patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int on port 2 Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.5296] device (eno12399): released from master device eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' OVNKubernetes == OVNKubernetes ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovnk_config_dir=/etc/ovnk Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovnk_var_dir=/var/lib/ovnk Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + extra_bridge_file=/etc/ovnk/extra_bridge Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + iface_default_hint_file=/var/lib/ovnk/iface_default_hint Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip_hint_file=/run/nodeip-configuration/primary-ip Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + mkdir -p /etc/ovnk Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + mkdir -p /var/lib/ovnk Jan 23 16:13:08 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=3328 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.5302] device (br-ex): state change: activated -> deactivating (reason 'removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ get_iface_default_hint /var/lib/ovnk/iface_default_hint Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ '[' -f /var/lib/ovnk/iface_default_hint ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.5303] manager: NetworkManager state is now CONNECTING Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5146]: +++ cat 
/var/lib/ovnk/iface_default_hint Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00001|ofproto_dpif_xlate(handler167)|WARN|received packet on unknown port 3 on bridge br-ex while processing tcp,in_port=3,vlan_tci=0x0000,dl_src=b4:96:91:c8:a0:60,dl_dst=b4:96:91:c8:a6:30,nw_src=192.168.18.14,nw_dst=192.168.18.12,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=56868,tp_dst=2380,tcp_flags=syn Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.5305] device (br-ex): releasing ovs interface br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ local iface_default_hint=eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ '[' eno12399 '!=' '' ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ '[' eno12399 '!=' br-ex ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ '[' eno12399 '!=' br-ex1 ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ '[' -d /sys/class/net/eno12399 ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ echo eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5145]: ++ return Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has begun starting up. Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + iface_default_hint=eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' eno12399 == '' ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /var/lib/ovnk/iface_default_hint ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/ovnk/extra_bridge ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '!' -f /run/configure-ovs-boot-done ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Running on boot, restoring previous configuration before proceeding...' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Running on boot, restoring previous configuration before proceeding... Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rollback_nm Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. 
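
The xtrace from configure-ovs.sh[5145] above walks through get_iface_default_hint: it reads /var/lib/ovnk/iface_default_hint and honors the hint only when it names a real interface that is not one of the OVN bridges. A minimal sketch of that logic, reconstructed from the trace rather than copied from the shipped script:

    # Reconstructed from the xtrace above; not the shipped configure-ovs.sh.
    get_iface_default_hint() {
      local iface_default_hint_file=$1
      if [ -f "$iface_default_hint_file" ]; then
        local iface_default_hint
        iface_default_hint=$(cat "$iface_default_hint_file")
        # Honor the hint only if it names an existing interface that is not
        # one of the OVN bridges themselves.
        if [ "$iface_default_hint" != "" ] &&
           [ "$iface_default_hint" != "br-ex" ] &&
           [ "$iface_default_hint" != "br-ex1" ] &&
           [ -d "/sys/class/net/$iface_default_hint" ]; then
          echo "$iface_default_hint"
          return
        fi
      fi
      echo ""
    }

    # Here the hint file holds "eno12399", so the hint is echoed back and the
    # script then sets iface_default_hint=eno12399, as seen in the trace.
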
Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5148]: ++ get_bridge_physical_interface ovs-if-phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5148]: ++ local bridge_interface=ovs-if-phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5148]: ++ local physical_interface= Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5149]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5148]: ++ physical_interface=eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5148]: ++ echo eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + phys0=eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5154]: ++ get_bridge_physical_interface ovs-if-phys1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5154]: ++ local bridge_interface=ovs-if-phys1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5154]: ++ local physical_interface= Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5155]: +++ nmcli -g connection.interface-name conn show ovs-if-phys1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5155]: +++ echo '' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5154]: ++ physical_interface= Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5154]: ++ echo '' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + phys1= Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + remove_all_ovn_bridges Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Reverting any previous OVS configuration' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Reverting any previous OVS configuration Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + remove_ovn_bridges br-ex phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + bridge_name=br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + port_name=phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + update_nm_conn_files br-ex phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + bridge_name=br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + port_name=phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs_port=ovs-port-br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs_interface=ovs-if-br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + default_port_name=ovs-port-phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + bridge_interface_name=ovs-if-phys0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + MANAGED_NM_CONN_FILES=($(echo "${NM_CONN_PATH}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5160]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 
/etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + shopt -s nullglob Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + MANAGED_NM_CONN_FILES+=(${NM_CONN_PATH}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${NM_CONN_PATH}/*${MANAGED_NM_CONN_SUFFIX}) Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + shopt -u nullglob Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rm_nm_conn_files Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rm -f /etc/NetworkManager/system-connections/br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/br-ex.nmconnection' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Removed nmconnection file /etc/NetworkManager/system-connections/br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nm_config_changed=1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rm -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nm_config_changed=1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rm -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Removed nmconnection 
file /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nm_config_changed=1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rm -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nm_config_changed=1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rm -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nm_config_changed=1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab kernel: device br-ex left promiscuous mode Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6362] dhcp4 (br-ex): canceled DHCP transaction Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6363] dhcp4 (br-ex): activation: beginning transaction (timeout in 45 seconds) Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6363] dhcp4 (br-ex): state changed no lease Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6365] dhcp6 (br-ex): canceled DHCP transaction Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6365] dhcp6 (br-ex): activation: beginning transaction (timeout in 45 seconds) Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6365] dhcp6 (br-ex): state changed no lease Jan 23 
16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6367] device (br-ex): released from master device br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5324]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5324]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5324]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5324]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5324]: Not a DHCP4 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5324]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5325]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5325]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5325]: Not a DHCP6 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5325]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00064|dpif|WARN|Dropped 1 log messages in last 61 seconds (most recently, 61 seconds ago) due to excessive rate Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00065|dpif|WARN|system@ovs-system: failed to query port patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int: Invalid argument Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6806] device (eno12399): state change: deactivating -> disconnected (reason 'removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6816] policy: set 'Wired Connection' (eno12409) as default for IPv6 routing and DNS Jan 23 16:13:08 hub-master-0.workload.bos2.lab kernel: device eno12399 left promiscuous mode Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -d /sys/class/net/br-ex1 ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'OVS configuration successfully reverted' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: OVS configuration successfully reverted Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + reload_profiles_nm eno12399 '' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 1 -eq 0 ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli connection reload Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6839] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath. Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6839] device (patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab): state change: unmanaged -> activated (reason 'connection-assumed', sys-iface-state: 'external') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6840] device (patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab): Activation: successful, device activated. 
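
The revert path traced in configure-ovs.sh[5125] above (update_nm_conn_files followed by rm_nm_conn_files) builds the list of managed NetworkManager profiles with a brace expansion that yields each name both bare and with the .nmconnection suffix, then deletes whichever files exist. A sketch of that step; NM_CONN_PATH is inferred from the expanded paths in the trace, and MANAGED_NM_CONN_SUFFIX is defined outside this excerpt:

    # Values as seen in the trace above.
    NM_CONN_PATH=/etc/NetworkManager/system-connections
    bridge_name=br-ex
    ovs_interface=ovs-if-br-ex
    ovs_port=ovs-port-br-ex
    bridge_interface_name=ovs-if-phys0
    default_port_name=ovs-port-phys0

    # {,.nmconnection} expands every profile name twice: bare and suffixed.
    MANAGED_NM_CONN_FILES=($(echo "${NM_CONN_PATH}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))

    # Also pick up any leftover suffixed profiles; nullglob keeps the array
    # clean when the glob matches nothing.
    shopt -s nullglob
    MANAGED_NM_CONN_FILES+=(${NM_CONN_PATH}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${NM_CONN_PATH}/*${MANAGED_NM_CONN_SUFFIX})
    shopt -u nullglob

    # rm_nm_conn_files: remove whichever managed profiles exist and record
    # that the NetworkManager configuration changed.
    for file in "${MANAGED_NM_CONN_FILES[@]}"; do
      if [ -f "$file" ]; then
        rm -f "$file"
        echo "Removed nmconnection file $file"
        nm_config_changed=1
      fi
    done
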
Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: ((src/libnm-core-impl/nm-connection.c:342)): assertion '' failed Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: ((src/libnm-core-impl/nm-connection.c:342)): assertion '' failed Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: ((src/libnm-core-impl/nm-connection.c:342)): assertion '' failed Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6844] policy: auto-activating connection 'ovs-if-phys0' (c5e9de2a-1ee5-4c3e-801c-4009076b6ab4) Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6845] device (br-ex): state change: deactivating -> disconnected (reason 'removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6852] device (eno12399): Activation: starting connection 'ovs-if-phys0' (c5e9de2a-1ee5-4c3e-801c-4009076b6ab4) Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6853] device (br-ex): state change: disconnected -> unmanaged (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6855] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath. Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6856] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6857] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6860] policy: auto-activating connection 'ovs-if-br-ex' (aa79fb48-31f3-48d3-9929-9b8ddeeff9b8) Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6861] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6864] device (eno12399): Activation: connection 'ovs-if-phys0' enslaved, continuing activation Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6866] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6867] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6869] device (br-ex): Activation: starting connection 'ovs-if-br-ex' (aa79fb48-31f3-48d3-9929-9b8ddeeff9b8) Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6871] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6874] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6876] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: 
[1674490388.6877] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00066|bridge|INFO|bridge br-ex: added interface eno12399 on port 1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00067|bridge|INFO|bridge br-ex: using datapath ID 0000b49691c8a630 Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00068|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt" Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6880] device (br-ex): Activation: connection 'ovs-if-br-ex' enslaved, continuing activation Jan 23 16:13:08 hub-master-0.workload.bos2.lab kernel: device eno12399 entered promiscuous mode Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6904] device (br-ex): state change: activated -> deactivating (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5328]: Error: Device '' not found. Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6911] device (eno12399): state change: activated -> deactivating (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6913] device (eno12399): released from master device br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6917] device (br-ex): state change: activated -> deactivating (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6919] device (br-ex): released from master device br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6923] device (eno12399): state change: ip-check -> deactivating (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6924] device (eno12399): releasing ovs interface eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6925] device (eno12399): released from master device eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6929] device (br-ex): state change: ip-config -> deactivating (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6930] device (br-ex): releasing ovs interface br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6931] device (br-ex): released from master device br-ex Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6936] audit: op="connections-reload" pid=5334 uid=0 result="success" Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6938] device (br-ex): state change: deactivating -> disconnected (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6942] device (eno12399): state change: deactivating -> disconnected (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + sleep 10 Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6946] device (br-ex): state change: deactivating -> disconnected (reason 
'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6948] device (eno12399): state change: deactivating -> disconnected (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00069|netdev|WARN|failed to set MTU for network device br-ex: No such device Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6959] device (br-ex): state change: disconnected -> unmanaged (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00070|bridge|INFO|bridge br-ex: added interface br-ex on port 65534 Jan 23 16:13:08 hub-master-0.workload.bos2.lab kernel: device br-ex entered promiscuous mode Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6963] device (eno12399): state change: disconnected -> unmanaged (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6965] device (br-ex): state change: disconnected -> unmanaged (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6967] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6972] device (eno12399): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6972] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6974] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd-udevd[5350]: Using default interface naming scheme 'rhel-8.0'. Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd-udevd[5350]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6978] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.6984] dhcp4 (eno12399): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00071|bridge|INFO|bridge br-ex: deleted interface eno12399 on port 1 Jan 23 16:13:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00072|bridge|INFO|bridge br-ex: deleted interface br-ex on port 65534 Jan 23 16:13:08 hub-master-0.workload.bos2.lab chronyd[2922]: Can't synchronise: no selectable sources Jan 23 16:13:08 hub-master-0.workload.bos2.lab chronyd[2922]: Source 192.168.18.9 offline Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5361]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5361]: + [[ ovs-if-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5361]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5361]: + echo 'Not a DHCP4 address. Ignoring.' 
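
The nm-dispatcher[5361]/[5362] entries above, and the many similar blocks that follow, come from a DHCP hook that exits early for most events. A sketch of its guard, reconstructed from the traces here and at 16:13:11 below; the variable names are assumptions based on the standard NetworkManager dispatcher environment (CONNECTION_ID, DHCP4_IP_ADDRESS), and "OVNKubernetes" is a literal substituted into the script when it is rendered:

    # Guard at the top of the DHCP4 hook, as reconstructed from the xtrace.
    [[ "OVNKubernetes" == "OVNKubernetes" ]] || exit 0   # only act for OVN-Kubernetes
    if [[ "$CONNECTION_ID" == "Wired Connection" ]]; then
      echo "Refusing to modify default connection."
      exit 0
    fi
    if [ -z "$DHCP4_IP_ADDRESS" ]; then
      echo "Not a DHCP4 address. Ignoring."
      exit 0
    fi
    # Only a real DHCP4 lease on a non-default profile reaches this point.
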
Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5361]: Not a DHCP4 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5361]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5362]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5362]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5362]: Not a DHCP6 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5362]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet... -- Subject: Unit console-login-helper-messages-issuegen.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has begun starting up. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5481]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5481]: + [[ ovs-if-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5481]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5481]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5481]: Not a DHCP4 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5481]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5486]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5486]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5486]: Not a DHCP6 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5486]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab kernel: device eno12399 left promiscuous mode Jan 23 16:13:08 hub-master-0.workload.bos2.lab kernel: device br-ex left promiscuous mode Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + INTERFACE_NAME=eno12399 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + OPERATION=pre-up Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + '[' pre-up '!=' pre-up ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5542]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5543]: ++ awk -F : '{if($1=="eno12399" && $2!~/^ovs*/) print $NF}' Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.7802] device (br-ex): state change: deactivating -> disconnected (reason 'connection-removed', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.7806] device (br-ex): state change: disconnected -> unmanaged (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount completed and consumed the indicated resources. Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-b00d55a7b36a356a0aa7895eeaa23f884281dad693d60a4f489fb0ee30878c8a-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-b00d55a7b36a356a0aa7895eeaa23f884281dad693d60a4f489fb0ee30878c8a-merged.mount has successfully entered the 'dead' state. Jan 23 16:13:08 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-b00d55a7b36a356a0aa7895eeaa23f884281dad693d60a4f489fb0ee30878c8a-merged.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-b00d55a7b36a356a0aa7895eeaa23f884281dad693d60a4f489fb0ee30878c8a-merged.mount completed and consumed the indicated resources. Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.7888] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath. Jan 23 16:13:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490388.7912] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + INTERFACE_CONNECTION_UUID=99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + '[' 99853833-baac-4bca-8508-0bff9efdaf37 == '' ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5549]: ++ nmcli -t -f connection.slave-type conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5550]: ++ awk -F : '{print $NF}' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + INTERFACE_OVS_SLAVE_TYPE= Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + '[' '' '!=' ovs-port ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5540]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5564]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5564]: + [[ br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5564]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5564]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5564]: Not a DHCP4 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5564]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5565]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5565]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5565]: Not a DHCP6 address. Ignoring. 
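
The pre-up hook traced in nm-dispatcher[5540] above (and again as [5832] below) resolves which non-OVS connection profile owns the interface, then checks whether that profile is enslaved to an OVS port, exiting quietly otherwise. A sketch of the two lookups, reconstructed from the trace; the interface name is supplied by the dispatcher, here fixed to the value seen in the log:

    INTERFACE_NAME=eno12399   # value seen in the trace
    # UUID of the profile on this device whose type is not ovs-*.
    INTERFACE_CONNECTION_UUID=$(nmcli -t -f device,type,uuid conn |
      awk -F : '{if($1=="'"$INTERFACE_NAME"'" && $2!~/^ovs*/) print $NF}')
    [ "$INTERFACE_CONNECTION_UUID" == "" ] && exit 0

    # Slave type of that profile; only ovs-port slaves are of interest.
    INTERFACE_OVS_SLAVE_TYPE=$(nmcli -t -f connection.slave-type conn show "$INTERFACE_CONNECTION_UUID" |
      awk -F : '{print $NF}')
    [ "$INTERFACE_OVS_SLAVE_TYPE" != "ovs-port" ] && exit 0
    # In the trace the slave type comes back empty, so the hook exits here.
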
Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5565]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5571]: Error: Device 'br-ex' not found. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5585]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5585]: + [[ ovs-port-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5585]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5585]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5585]: Not a DHCP4 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5585]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5586]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5586]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5586]: Not a DHCP6 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5586]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5606]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5606]: + [[ ovs-port-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5606]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5606]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5606]: Not a DHCP4 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5606]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5607]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5607]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5607]: Not a DHCP6 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5607]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5613]: Error: Device 'br-ex' not found. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5627]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5627]: + [[ ovs-if-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5627]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5627]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5627]: Not a DHCP4 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5627]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5628]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5628]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5628]: Not a DHCP6 address. Ignoring. 
Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5628]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5648]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5648]: + [[ ovs-if-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5648]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5648]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5648]: Not a DHCP4 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5648]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5649]: + '[' -z ']' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5649]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5649]: Not a DHCP6 address. Ignoring. Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5649]: + exit 0 Jan 23 16:13:08 hub-master-0.workload.bos2.lab nm-dispatcher[5655]: Error: Device 'br-ex' not found. Jan 23 16:13:09 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state. Jan 23 16:13:09 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet. -- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has finished starting up. -- -- The start-up result is done. Jan 23 16:13:09 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 12ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources. Jan 23 16:13:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490390.1725] dhcp6 (eno12399): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:13:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490390.1736] dhcp6 (eno12399): state changed new lease, address=2600:52:7:18::12 Jan 23 16:13:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490390.7015] dhcp4 (eno12399): state changed new lease, address=192.168.18.12 Jan 23 16:13:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490390.7017] policy: set 'Wired Connection' (eno12399) as default for IPv4 routing and DNS Jan 23 16:13:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490390.7047] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:10 hub-master-0.workload.bos2.lab nm-dispatcher[5685]: NM resolv-prepender triggered by eno12399 dhcp4-change. Jan 23 16:13:10 hub-master-0.workload.bos2.lab nm-dispatcher[5686]: nameserver 2600:52:7:18::9 Jan 23 16:13:10 hub-master-0.workload.bos2.lab nm-dispatcher[5686]: nameserver 192.168.18.9 Jan 23 16:13:10 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jan 23 16:13:10 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope. -- Subject: Unit libpod-conmon-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d. -- Subject: Unit libpod-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=info msg="Parsed Virtual IP 192.168.18.7" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=info msg="Parsed Virtual IP 192.168.18.8" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="retrieved Address map map[0xc000036fc0:[127.0.0.1/8 lo ::1/128] 0xc000037200:[192.168.18.12/25 eno12399 2600:52:7:18::12/128]]" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug 
msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Retrieved route map map[3:[{Ifindex: 3 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Checking whether address 192.168.18.12/25 eno12399 contains VIP 192.168.18.7" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=debug msg="Address 192.168.18.12/25 eno12399 contains VIP 192.168.18.7" Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5723]: time="2023-01-23T16:13:11Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]" Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: libpod-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope has successfully entered the 'dead' state. Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: libpod-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope: Consumed 51ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope completed and consumed the indicated resources. Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope has successfully entered the 'dead' state. 
Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope: Consumed 110ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-cf0dcd707f8d025d1304eb9191eccd3333292c9c076a2f2ad3742403cad0347d.scope completed and consumed the indicated resources. Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5809]: Failed to get unit file state for systemd-resolved.service: No such file or directory Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5685]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf) Jan 23 16:13:11 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490391.5656] audit: op="reload" arg="2" pid=5819 uid=0 result="success" Jan 23 16:13:11 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490391.5656] config: signal: DNS_RC Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5824]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5824]: + [[ Wired Connection == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5824]: + echo 'Refusing to modify default connection.' Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5824]: Refusing to modify default connection. Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5824]: + exit 0 Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5825]: + '[' -z ']' Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5825]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5825]: Not a DHCP6 address. Ignoring. 
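
The "NM resolv-prepender" lines above show the hook putting the node's own address first in /etc/resolv.conf while keeping the nameservers NetworkManager collected in /var/run/NetworkManager/resolv.conf (here 2600:52:7:18::9 and 192.168.18.9). A rough sketch of that effect only; the real hook also preserves search domains and runs the node-ip container first, neither of which is shown here:

    NODEIP=192.168.18.12   # chosen node IP reported in the log
    {
      echo "nameserver $NODEIP"
      grep '^nameserver' /var/run/NetworkManager/resolv.conf
    } > /etc/resolv.conf.tmp
    mv /etc/resolv.conf.tmp /etc/resolv.conf
    # Result: "nameserver 192.168.18.12" is prepended ahead of the
    # DHCP-provided nameservers, as nm-dispatcher[5685] reports above.
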
Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5825]: + exit 0 Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + INTERFACE_NAME=eno12399 Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + OPERATION=pre-up Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + '[' pre-up '!=' pre-up ']' Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5834]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5835]: ++ awk -F : '{if($1=="eno12399" && $2!~/^ovs*/) print $NF}' Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + INTERFACE_CONNECTION_UUID=99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + '[' 99853833-baac-4bca-8508-0bff9efdaf37 == '' ']' Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5840]: ++ nmcli -t -f connection.slave-type conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5841]: ++ awk -F : '{print $NF}' Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + INTERFACE_OVS_SLAVE_TYPE= Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + '[' '' '!=' ovs-port ']' Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5832]: + exit 0 Jan 23 16:13:11 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490391.6179] device (eno12399): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:11 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490391.6180] device (eno12399): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:11 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490391.6181] manager: NetworkManager state is now CONNECTED_SITE Jan 23 16:13:11 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490391.6183] device (eno12399): Activation: successful, device activated. Jan 23 16:13:11 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490391.6185] manager: NetworkManager state is now CONNECTED_GLOBAL Jan 23 16:13:11 hub-master-0.workload.bos2.lab chronyd[2922]: Source 192.168.18.9 online Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5891]: NM resolv-prepender triggered by eno12399 up. Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5896]: nameserver 2600:52:7:18::9 Jan 23 16:13:11 hub-master-0.workload.bos2.lab nm-dispatcher[5896]: nameserver 192.168.18.9 Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jan 23 16:13:11 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope. 
-- Subject: Unit libpod-conmon-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c. -- Subject: Unit libpod-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=info msg="Parsed Virtual IP 192.168.18.7" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=info msg="Parsed Virtual IP 192.168.18.8" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="retrieved Address map map[0xc000324480:[127.0.0.1/8 lo ::1/128] 0xc0003246c0:[192.168.18.12/25 eno12399 2600:52:7:18::12/128]]" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 
Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Retrieved route map map[3:[{Ifindex: 3 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Checking whether address 192.168.18.12/25 eno12399 contains VIP 192.168.18.7" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=debug msg="Address 192.168.18.12/25 eno12399 contains VIP 192.168.18.7" Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5930]: time="2023-01-23T16:13:12Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]" Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: libpod-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope has successfully entered the 'dead' state. Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: libpod-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope: Consumed 49ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope completed and consumed the indicated resources. Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope has successfully entered the 'dead' state. Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope: Consumed 109ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-2aeaa8267364b309ee0478536417819a92339a6cd9271080b3ae87bd58562a2c.scope completed and consumed the indicated resources. 
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6017]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[5891]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:13:12 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490392.4988] audit: op="reload" arg="2" pid=6027 uid=0 result="success"
Jan 23 16:13:12 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490392.4989] config: signal: DNS_RC
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6032]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6032]: + [[ Wired Connection == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6032]: + echo 'Refusing to modify default connection.'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6032]: Refusing to modify default connection.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6032]: + exit 0
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6033]: + '[' -z 2600:52:7:18::12 ']'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6035]: ++ ip -j -6 a show eno12399
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6036]: ++ jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .preferred_life_time'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6033]: + LEASE_TIME=43199
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6038]: ++ ip -j -6 a show eno12399
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6039]: ++ jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .prefixlen'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6033]: + PREFIX_LEN=128
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6033]: + '[' 43199 -lt 4294967295 ']'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6033]: + echo 'Not an infinite DHCP6 lease. Ignoring.'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6033]: Not an infinite DHCP6 lease. Ignoring.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6033]: + exit 0
Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet...
-- Subject: Unit console-login-helper-messages-issuegen.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit console-login-helper-messages-issuegen.service has begun starting up.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6062]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6062]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6062]: + '[' -z ']'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6062]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6062]: Not a DHCP4 address. Ignoring.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6062]: + exit 0
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6063]: + '[' -z ']'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6063]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6063]: Not a DHCP6 address. Ignoring.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6063]: + exit 0
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6066]: Error: Device '' not found.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6077]: NM resolv-prepender triggered by eno12399 dhcp6-change.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6082]: nameserver 2600:52:7:18::9
Jan 23 16:13:12 hub-master-0.workload.bos2.lab nm-dispatcher[6082]: nameserver 192.168.18.9
Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount completed and consumed the indicated resources.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope.
-- Subject: Unit libpod-conmon-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-conmon-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.
-- Subject: Unit libpod-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope has finished starting up.
--
-- The start-up result is done.
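The nm-dispatcher[6033] trace above is a hook deciding whether to act on a DHCPv6 lease: it only cares about infinite leases, and 43199 seconds is finite, so it exits. Reassembled from the traced commands (the ip/jq pipeline is verbatim from the log, here with jq --arg substitution; the script framing and variable wiring around it are assumptions):

    IFACE=eno12399
    ADDR=2600:52:7:18::12        # the node's global DHCPv6 address in this log
    [ -z "$ADDR" ] && exit 0
    LEASE_TIME=$(ip -j -6 a show "$IFACE" | jq -r --arg a "$ADDR" \
        '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true)
         | select(.local==$a) | .preferred_life_time')
    PREFIX_LEN=$(ip -j -6 a show "$IFACE" | jq -r --arg a "$ADDR" \
        '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true)
         | select(.local==$a) | .prefixlen')
    # 4294967295 (2^32-1) is how iproute2 reports a "forever" lifetime
    if [ "$LEASE_TIME" -lt 4294967295 ]; then
        echo 'Not an infinite DHCP6 lease. Ignoring.'
        exit 0
    fi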
Jan 23 16:13:12 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="retrieved Address map map[0xc0001c78c0:[127.0.0.1/8 lo ::1/128] 0xc0001c7b00:[192.168.18.12/25 eno12399 2600:52:7:18::12/128]]"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Ignoring filtered route {Ifindex: 3 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Retrieved route map map[3:[{Ifindex: 3 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}] 7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Checking whether address 192.168.18.12/25 eno12399 contains VIP 192.168.18.7"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=debug msg="Address 192.168.18.12/25 eno12399 contains VIP 192.168.18.7"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6118]: time="2023-01-23T16:13:13Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: libpod-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope has successfully entered the 'dead' state.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: libpod-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope: Consumed 51ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope completed and consumed the indicated resources.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope has successfully entered the 'dead' state.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope: Consumed 92ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-conmon-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292.scope completed and consumed the indicated resources.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6196]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6077]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:13:13 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490393.4588] audit: op="reload" arg="2" pid=6206 uid=0 result="success"
Jan 23 16:13:13 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490393.4588] config: signal: DNS_RC
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6211]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6211]: + [[ Wired Connection == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6211]: + echo 'Refusing to modify default connection.'
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6211]: Refusing to modify default connection.
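The resolv-prepender messages repeated in this run come from a dispatcher hook that forces the chosen node IP to be the first nameserver, keeping the servers NetworkManager learned (2600:52:7:18::9 and 192.168.18.9 in this log) behind it. A rough sketch of the behaviour being logged; the paths and prepended address are from the log, but the exact script body is an assumption:

    NODEIP=192.168.18.12
    NM_RESOLV=/var/run/NetworkManager/resolv.conf
    {
        echo "nameserver $NODEIP"
        # keep NetworkManager's nameservers, minus any duplicate of the node IP
        grep '^nameserver' "$NM_RESOLV" | grep -vw "$NODEIP"
    } > /etc/resolv.conf.tmp && mv /etc/resolv.conf.tmp /etc/resolv.conf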
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6211]: + exit 0
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6212]: + '[' -z 2600:52:7:18::12 ']'
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6214]: ++ ip -j -6 a show eno12399
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6215]: ++ jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .preferred_life_time'
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6212]: + LEASE_TIME=43198
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6217]: ++ ip -j -6 a show eno12399
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6218]: ++ jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .prefixlen'
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6212]: + PREFIX_LEN=128
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6212]: + '[' 43198 -lt 4294967295 ']'
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6212]: + echo 'Not an infinite DHCP6 lease. Ignoring.'
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6212]: Not an infinite DHCP6 lease. Ignoring.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab nm-dispatcher[6212]: + exit 0
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet.
-- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit console-login-helper-messages-issuegen.service has finished starting up.
--
-- The start-up result is done.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 12ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay\x2dcontainers-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292-userdata-shm.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay\x2dcontainers-8798ac1e14aae1b2ff657d4814046cc8cff3ea694101458c5e8a32bae8947292-userdata-shm.mount completed and consumed the indicated resources.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount completed and consumed the indicated resources.
Jan 23 16:13:13 hub-master-0.workload.bos2.lab chronyd[2922]: Selected source 192.168.18.9
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for dev in $@
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6235]: ++ nmcli -g GENERAL.STATE device show eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local 'connected_state=100 (connected)'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ 100 (connected) =~ disconnected ]]
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Waiting for interface eno12399 to activate...'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Waiting for interface eno12399 to activate...
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + timeout 60 bash -c 'while ! nmcli -g DEVICE,STATE c | grep "eno12399:activated"; do sleep 5; done'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6243]: eno12399:activated
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nm_config_changed=0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + print_state
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Current device, connection, interface and routing state:'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Current device, connection, interface and routing state:
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6247]: + nmcli -g all device
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6248]: + grep -v unmanaged
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6248]: eno12399:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/10:Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:/org/freedesktop/NetworkManager/ActiveConnection/11
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6248]: patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab:ovs-interface:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/15:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6248]: eno12409:ethernet:connecting (getting IP configuration):none:none:/org/freedesktop/NetworkManager/Devices/4:Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:/org/freedesktop/NetworkManager/ActiveConnection/2
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6248]: eno8303:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/5:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6248]: eno8403:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/6:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6248]: ens2f0:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/7:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6248]: ens2f1:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/8:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli -g all connection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6252]: Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:802-3-ethernet:1674490391:Mon Jan 23 16\:13\:11 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/2:yes:eno12399:activated:/org/freedesktop/NetworkManager/ActiveConnection/11::/etc/NetworkManager/system-connections/default_connection.nmconnection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6252]: Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:802-3-ethernet:1674490391:Mon Jan 23 16\:13\:11 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/2:yes:eno12409:activating:/org/freedesktop/NetworkManager/ActiveConnection/2::/etc/NetworkManager/system-connections/default_connection.nmconnection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6252]: Wired Connection:8105e4a7-d75c-4c11-b250-7d472ed203fe:802-3-ethernet:0:never:yes:0:no:/org/freedesktop/NetworkManager/Settings/1:no:::::/run/NetworkManager/system-connections/default_connection.nmconnection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip -d address show
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: inet 127.0.0.1/8 scope host lo
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: inet6 ::1/128 scope host
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 2: eno8303: mtu 1500 qdisc mq state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether b0:7b:25:de:1a:bc brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 60 maxmtu 9000 numtxqueues 5 numrxqueues 5 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 3: eno12399: mtu 1500 qdisc mq state UP group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether b4:96:91:c8:a6:30 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9702 numtxqueues 112 numrxqueues 112 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: inet 192.168.18.12/25 brd 192.168.18.127 scope global dynamic noprefixroute eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: valid_lft 86393sec preferred_lft 86393sec
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: inet6 2600:52:7:18::12/128 scope global dynamic noprefixroute
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: valid_lft 43193sec preferred_lft 43193sec
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: inet6 fe80::b696:91ff:fec8:a630/64 scope link noprefixroute
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 4: ens2f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether 04:3f:72:fe:d9:b8 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 768 numrxqueues 126 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 5: eno8403: mtu 1500 qdisc mq state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether b0:7b:25:de:1a:bd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 60 maxmtu 9000 numtxqueues 5 numrxqueues 5 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 6: ens2f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether 04:3f:72:fe:d9:b9 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 768 numrxqueues 126 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 7: eno12409: mtu 1500 qdisc mq state UP group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether b4:96:91:c8:a6:31 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9702 numtxqueues 112 numrxqueues 112 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: inet6 fe80::b696:91ff:fec8:a631/64 scope link noprefixroute
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 8: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether e6:05:44:0a:7c:b5 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 10: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether c6:76:de:0c:d9:da brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: inet6 fe80::c476:deff:fe0c:d9da/64 scope link
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 11: ovn-k8s-mp0: mtu 1400 qdisc noop state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether 12:16:15:ff:96:b9 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: 12: br-int: mtu 1400 qdisc noop state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: link/ether ba:22:7f:9b:cf:d8 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6256]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip route show
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6257]: default via 192.168.18.1 dev eno12399 proto dhcp src 192.168.18.12 metric 102
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6257]: 192.168.18.0/25 dev eno12399 proto kernel scope link src 192.168.18.12 metric 102
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip -6 route show
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: ::1 dev lo proto kernel metric 256 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: 2600:52:7:18::12 dev eno12399 proto kernel metric 102 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: 2600:52:7:18::/64 dev eno12409 proto ra metric 101 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: 2600:52:7:18::/64 dev eno12399 proto ra metric 102 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: fe80::/64 dev eno12409 proto kernel metric 1024 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: fe80::/64 dev eno12399 proto kernel metric 1024 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: default via fe80::1532:4e62:7604:4733 dev eno12409 proto ra metric 101 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6258]: default via fe80::1532:4e62:7604:4733 dev eno12399 proto ra metric 102 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + touch /run/configure-ovs-boot-done
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ get_nodeip_interface /var/lib/ovnk/iface_default_hint /etc/ovnk/extra_bridge /run/nodeip-configuration/primary-ip
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ local iface=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ local counter=0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ local extra_bridge_file=/etc/ovnk/extra_bridge
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ local extra_bridge=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ '[' -f /etc/ovnk/extra_bridge ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ get_nodeip_hint_interface /run/nodeip-configuration/primary-ip ''
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ local ip_hint=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ local extra_bridge=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ local iface=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6262]: ++++ get_ip_from_ip_hint_file /run/nodeip-configuration/primary-ip
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6262]: ++++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6262]: ++++ [[ ! -f /run/nodeip-configuration/primary-ip ]]
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6263]: +++++ cat /run/nodeip-configuration/primary-ip
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6262]: ++++ ip_hint=192.168.18.12
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6262]: ++++ echo 192.168.18.12
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ ip_hint=192.168.18.12
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ [[ -z 192.168.18.12 ]]
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6266]: ++++ ip -j addr
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6267]: ++++ jq -r 'first(.[] | select(any(.addr_info[]; .local=="192.168.18.12") and .ifname!="br-ex1" and .ifname!="")) | .ifname'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ iface=eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ [[ -n eno12399 ]]
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6261]: +++ echo eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ iface=eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ [[ -n eno12399 ]]
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ echo eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6260]: ++ return
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + iface=eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' eno12399 '!=' br-ex ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/ovnk/extra_bridge ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6268]: ++ nmcli connection show --active br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -z '' ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Bridge br-ex is not active, restoring previous configuration before proceeding...'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Bridge br-ex is not active, restoring previous configuration before proceeding...
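The get_nodeip_interface trace above resolves the interface to bridge by working backwards from the IP that nodeip-configuration wrote to disk. The same lookup as a stand-alone snippet; the jq filter is the one traced at configure-ovs.sh[6267], rewritten here with --arg substitution:

    ip_hint=$(cat /run/nodeip-configuration/primary-ip)   # 192.168.18.12 on this node
    iface=$(ip -j addr | jq -r --arg ip "$ip_hint" \
        'first(.[] | select(any(.addr_info[]; .local==$ip)
                            and .ifname!="br-ex1" and .ifname!="")) | .ifname')
    echo "$iface"                                         # -> eno12399

Excluding br-ex1 and the empty ifname keeps the hint from resolving to an extra bridge or a stale entry, which matters on reruns after a bridge already exists.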
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rollback_nm
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6273]: ++ get_bridge_physical_interface ovs-if-phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6273]: ++ local bridge_interface=ovs-if-phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6273]: ++ local physical_interface=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6274]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6274]: +++ echo ''
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6273]: ++ physical_interface=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6273]: ++ echo ''
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + phys0=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6279]: ++ get_bridge_physical_interface ovs-if-phys1
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6279]: ++ local bridge_interface=ovs-if-phys1
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6279]: ++ local physical_interface=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6280]: +++ nmcli -g connection.interface-name conn show ovs-if-phys1
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6280]: +++ echo ''
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6279]: ++ physical_interface=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6279]: ++ echo ''
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + phys1=
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + remove_all_ovn_bridges
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Reverting any previous OVS configuration'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Reverting any previous OVS configuration
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + remove_ovn_bridges br-ex phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + bridge_name=br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + port_name=phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + update_nm_conn_files br-ex phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + bridge_name=br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + port_name=phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs_port=ovs-port-br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs_interface=ovs-if-br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + default_port_name=ovs-port-phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + bridge_interface_name=ovs-if-phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + MANAGED_NM_CONN_FILES=($(echo "${NM_CONN_PATH}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Jan 23 16:13:18 hub-master-0.workload.bos2.lab ovs-vsctl[6286]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6285]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + shopt -s nullglob
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + MANAGED_NM_CONN_FILES+=(${NM_CONN_PATH}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${NM_CONN_PATH}/*${MANAGED_NM_CONN_SUFFIX})
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + shopt -u nullglob
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + rm_nm_conn_files
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -d /sys/class/net/br-ex1 ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'OVS configuration successfully reverted'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: OVS configuration successfully reverted
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + reload_profiles_nm '' ''
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 0 -eq 0 ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + return
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + print_state
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Current device, connection, interface and routing state:'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Current device, connection, interface and routing state:
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6287]: + nmcli -g all device
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6288]: + grep -v unmanaged
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6288]: eno12399:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/10:Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:/org/freedesktop/NetworkManager/ActiveConnection/11
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6288]: patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab:ovs-interface:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/15:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6288]: eno12409:ethernet:connecting (getting IP configuration):none:none:/org/freedesktop/NetworkManager/Devices/4:Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:/org/freedesktop/NetworkManager/ActiveConnection/2
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6288]: eno8303:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/5:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6288]: eno8403:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/6:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6288]: ens2f0:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/7:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6288]: ens2f1:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/8:::
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli -g all connection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6292]: Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:802-3-ethernet:1674490391:Mon Jan 23 16\:13\:11 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/2:yes:eno12399:activated:/org/freedesktop/NetworkManager/ActiveConnection/11::/etc/NetworkManager/system-connections/default_connection.nmconnection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6292]: Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:802-3-ethernet:1674490391:Mon Jan 23 16\:13\:11 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/2:yes:eno12409:activating:/org/freedesktop/NetworkManager/ActiveConnection/2::/etc/NetworkManager/system-connections/default_connection.nmconnection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6292]: Wired Connection:8105e4a7-d75c-4c11-b250-7d472ed203fe:802-3-ethernet:0:never:yes:0:no:/org/freedesktop/NetworkManager/Settings/1:no:::::/run/NetworkManager/system-connections/default_connection.nmconnection
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip -d address show
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: inet 127.0.0.1/8 scope host lo
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: inet6 ::1/128 scope host
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 2: eno8303: mtu 1500 qdisc mq state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether b0:7b:25:de:1a:bc brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 60 maxmtu 9000 numtxqueues 5 numrxqueues 5 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 3: eno12399: mtu 1500 qdisc mq state UP group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether b4:96:91:c8:a6:30 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9702 numtxqueues 112 numrxqueues 112 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: inet 192.168.18.12/25 brd 192.168.18.127 scope global dynamic noprefixroute eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: valid_lft 86393sec preferred_lft 86393sec
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: inet6 2600:52:7:18::12/128 scope global dynamic noprefixroute
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: valid_lft 43193sec preferred_lft 43193sec
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: inet6 fe80::b696:91ff:fec8:a630/64 scope link noprefixroute
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 4: ens2f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether 04:3f:72:fe:d9:b8 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 768 numrxqueues 126 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 5: eno8403: mtu 1500 qdisc mq state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether b0:7b:25:de:1a:bd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 60 maxmtu 9000 numtxqueues 5 numrxqueues 5 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 6: ens2f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether 04:3f:72:fe:d9:b9 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 768 numrxqueues 126 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 7: eno12409: mtu 1500 qdisc mq state UP group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether b4:96:91:c8:a6:31 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9702 numtxqueues 112 numrxqueues 112 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: inet6 fe80::b696:91ff:fec8:a631/64 scope link noprefixroute
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 8: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether e6:05:44:0a:7c:b5 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 10: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether c6:76:de:0c:d9:da brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: inet6 fe80::c476:deff:fe0c:d9da/64 scope link
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: valid_lft forever preferred_lft forever
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 11: ovn-k8s-mp0: mtu 1400 qdisc noop state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether 12:16:15:ff:96:b9 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: 12: br-int: mtu 1400 qdisc noop state DOWN group default qlen 1000
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: link/ether ba:22:7f:9b:cf:d8 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6296]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip route show
Jan 23 16:13:18 hub-master-0.workload.bos2.lab ovs-vsctl[6313]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6297]: default via 192.168.18.1 dev eno12399 proto dhcp src 192.168.18.12 metric 102
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6297]: 192.168.18.0/25 dev eno12399 proto kernel scope link src 192.168.18.12 metric 102
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip -6 route show
Jan 23 16:13:18 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490398.9049] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/24)
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: ::1 dev lo proto kernel metric 256 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: 2600:52:7:18::12 dev eno12399 proto kernel metric 102 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: 2600:52:7:18::/64 dev eno12409 proto ra metric 101 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: 2600:52:7:18::/64 dev eno12399 proto ra metric 102 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: fe80::/64 dev eno12409 proto kernel metric 1024 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: fe80::/64 dev eno12399 proto kernel metric 1024 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: default via fe80::1532:4e62:7604:4733 dev eno12409 proto ra metric 101 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6298]: default via fe80::1532:4e62:7604:4733 dev eno12399 proto ra metric 102 pref medium
Jan 23 16:13:18 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490398.9050] audit: op="connection-add" uuid="69b0fd5a-9982-4dfb-a0ff-9478dcfb5700" name="br-ex" pid=6314 uid=0 result="success"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + convert_to_bridge eno12399 br-ex phys0 48
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local iface=eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local bridge_name=br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local port_name=phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local bridge_metric=48
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local ovs_port=ovs-port-br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local ovs_interface=ovs-if-br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local default_port_name=ovs-port-phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local bridge_interface_name=ovs-if-phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' eno12399 = br-ex ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nm_config_changed=1
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -z eno12399 ']'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + iface_mac=b4:96:91:c8:a6:30
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'MAC address found for iface: eno12399: b4:96:91:c8:a6:30'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: MAC address found for iface: eno12399: b4:96:91:c8:a6:30
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6301]: ++ ip link show eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6302]: ++ awk '{print $5; exit}'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + iface_mtu=1500
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ -z 1500 ]]
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'MTU found for iface: eno12399: 1500'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: MTU found for iface: eno12399: 1500
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6304]: ++ nmcli --fields UUID,DEVICE conn show --active
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6305]: ++ awk '/\seno12399\s*$/ {print $1}'
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + old_conn=99853833-baac-4bca-8508-0bff9efdaf37
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli connection show br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + add_nm_conn type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu 1500 connection.autoconnect-slaves 1
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli c add type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu 1500 connection.autoconnect-slaves 1 connection.autoconnect no
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6314]: Connection 'br-ex' (69b0fd5a-9982-4dfb-a0ff-9478dcfb5700) successfully added.
Jan 23 16:13:18 hub-master-0.workload.bos2.lab ovs-vsctl[6322]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-port br-ex eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli connection show ovs-port-phys0
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs-vsctl --timeout=30 --if-exists del-port br-ex eno12399
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + add_nm_conn type ovs-port conn.interface eno12399 master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli c add type ovs-port conn.interface eno12399 master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1 connection.autoconnect no
Jan 23 16:13:18 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490398.9410] manager: (eno12399): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 23 16:13:18 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490398.9411] audit: op="connection-add" uuid="bac6281c-e524-4e3e-8259-abe05ad061e7" name="ovs-port-phys0" pid=6323 uid=0 result="success"
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6323]: Connection 'ovs-port-phys0' (bac6281c-e524-4e3e-8259-abe05ad061e7) successfully added.
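rollback_nm's cleanup hinges on one brace expansion: every connection-profile path the script ever creates, with and without the .nmconnection suffix. Isolated from the trace above (the expansion and file list are verbatim; the rm is presumably what rm_nm_conn_files does when a file exists, though on this boot every -f test failed and nothing was removed):

    NM_CONN_PATH=/etc/NetworkManager/system-connections
    bridge_name=br-ex ovs_interface=ovs-if-br-ex ovs_port=ovs-port-br-ex
    bridge_interface_name=ovs-if-phys0 default_port_name=ovs-port-phys0
    # expands to the ten paths echoed by configure-ovs.sh[6285] above
    MANAGED_NM_CONN_FILES=($(echo "${NM_CONN_PATH}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
    for file in "${MANAGED_NM_CONN_FILES[@]}"; do
        [ -f "$file" ] && rm -f "$file"
    done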
Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli connection show ovs-port-br-ex Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs-vsctl --timeout=30 --if-exists del-port br-ex br-ex Jan 23 16:13:18 hub-master-0.workload.bos2.lab ovs-vsctl[6331]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-port br-ex br-ex Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + add_nm_conn type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli c add type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex connection.autoconnect no Jan 23 16:13:18 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490398.9760] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26) Jan 23 16:13:18 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490398.9761] audit: op="connection-add" uuid="a6d2b9b5-7a03-4001-9502-ea4b59e4c55d" name="ovs-port-br-ex" pid=6332 uid=0 result="success" Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6332]: Connection 'ovs-port-br-ex' (a6d2b9b5-7a03-4001-9502-ea4b59e4c55d) successfully added. Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + extra_phys_args=() Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6336]: ++ nmcli --get-values connection.type conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 802-3-ethernet == vlan ']' Jan 23 16:13:18 hub-master-0.workload.bos2.lab configure-ovs.sh[6340]: ++ nmcli --get-values connection.type conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 802-3-ethernet == bond ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6344]: ++ nmcli --get-values connection.type conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 802-3-ethernet == team ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + iface_type=802-3-ethernet Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '!' 
'' = 0 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + extra_phys_args+=(802-3-ethernet.cloned-mac-address "${iface_mac}") Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli connection show ovs-if-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs-vsctl --timeout=30 --if-exists destroy interface eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vsctl[6352]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists destroy interface eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + add_nm_conn type 802-3-ethernet conn.interface eno12399 master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu 1500 802-3-ethernet.cloned-mac-address b4:96:91:c8:a6:30 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli c add type 802-3-ethernet conn.interface eno12399 master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu 1500 802-3-ethernet.cloned-mac-address b4:96:91:c8:a6:30 connection.autoconnect no Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.0522] audit: op="connection-add" uuid="e68e1ed9-b86e-4f80-9531-ca5523ce55b5" name="ovs-if-phys0" pid=6353 uid=0 result="success" Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6353]: Connection 'ovs-if-phys0' (e68e1ed9-b86e-4f80-9531-ca5523ce55b5) successfully added. Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6357]: ++ nmcli -g connection.uuid conn show ovs-if-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + new_conn=e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6361]: ++ nmcli -g connection.uuid conn show ovs-port-br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs_port_conn=a6d2b9b5-7a03-4001-9502-ea4b59e4c55d Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + replace_connection_master 99853833-baac-4bca-8508-0bff9efdaf37 e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local old=99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local new=e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6365]: ++ nmcli -g UUID connection show Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6369]: ++ nmcli -g connection.master connection show uuid 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' '!=' 99853833-baac-4bca-8508-0bff9efdaf37 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6373]: ++ nmcli -g connection.master connection show uuid 8105e4a7-d75c-4c11-b250-7d472ed203fe Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' '!=' 99853833-baac-4bca-8508-0bff9efdaf37 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + 
continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6377]: ++ nmcli -g connection.master connection show uuid 69b0fd5a-9982-4dfb-a0ff-9478dcfb5700 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' '!=' 99853833-baac-4bca-8508-0bff9efdaf37 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6381]: ++ nmcli -g connection.master connection show uuid e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 '!=' 99853833-baac-4bca-8508-0bff9efdaf37 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6385]: ++ nmcli -g connection.master connection show uuid a6d2b9b5-7a03-4001-9502-ea4b59e4c55d Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' br-ex '!=' 99853833-baac-4bca-8508-0bff9efdaf37 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6389]: ++ nmcli -g connection.master connection show uuid bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' br-ex '!=' 99853833-baac-4bca-8508-0bff9efdaf37 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + replace_connection_master eno12399 e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local old=eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local new=e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6393]: ++ nmcli -g UUID connection show Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6397]: ++ nmcli -g connection.master connection show uuid 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' '!=' eno12399 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6401]: ++ nmcli -g connection.master connection show uuid 8105e4a7-d75c-4c11-b250-7d472ed203fe Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' '!=' eno12399 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 
hub-master-0.workload.bos2.lab configure-ovs.sh[6405]: ++ nmcli -g connection.master connection show uuid 69b0fd5a-9982-4dfb-a0ff-9478dcfb5700 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' '!=' eno12399 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6409]: ++ nmcli -g connection.master connection show uuid e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 '!=' eno12399 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6413]: ++ nmcli -g connection.master connection show uuid a6d2b9b5-7a03-4001-9502-ea4b59e4c55d Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' br-ex '!=' eno12399 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn_uuid in $(nmcli -g UUID connection show) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6417]: ++ nmcli -g connection.master connection show uuid bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' br-ex '!=' eno12399 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + continue Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli connection show ovs-if-br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs-vsctl --timeout=30 --if-exists destroy interface br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vsctl[6425]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists destroy interface br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6426]: + nmcli --fields ipv4.method,ipv6.method conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6427]: + grep manual Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + extra_if_brex_args= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6432]: ++ ip -j a show dev eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6433]: ++ jq '.[0].addr_info | map(. | select(.family == "inet")) | length' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + num_ipv4_addrs=1 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 1 -gt 0 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + extra_if_brex_args+='ipv4.may-fail no ' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6435]: ++ ip -j a show dev eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6436]: ++ jq '.[0].addr_info | map(. 
| select(.family == "inet6" and .scope != "link")) | length' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + num_ip6_addrs=1 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 1 -gt 0 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + extra_if_brex_args+='ipv6.may-fail no ' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6437]: ++ nmcli --get-values ipv4.dhcp-client-id conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + dhcp_client_id= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -n '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6441]: ++ nmcli --get-values ipv6.dhcp-duid conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + dhcp6_client_id= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -n '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6445]: ++ nmcli --get-values ipv6.addr-gen-mode conn show 99853833-baac-4bca-8508-0bff9efdaf37 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ipv6_addr_gen_mode=eui64 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -n eui64 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + extra_if_brex_args+='ipv6.addr-gen-mode eui64 ' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + add_nm_conn type ovs-interface slave-type ovs-port conn.interface br-ex master a6d2b9b5-7a03-4001-9502-ea4b59e4c55d con-name ovs-if-br-ex 802-3-ethernet.mtu 1500 802-3-ethernet.cloned-mac-address b4:96:91:c8:a6:30 ipv4.route-metric 48 ipv6.route-metric 48 ipv4.may-fail no ipv6.may-fail no ipv6.addr-gen-mode eui64 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli c add type ovs-interface slave-type ovs-port conn.interface br-ex master a6d2b9b5-7a03-4001-9502-ea4b59e4c55d con-name ovs-if-br-ex 802-3-ethernet.mtu 1500 802-3-ethernet.cloned-mac-address b4:96:91:c8:a6:30 ipv4.route-metric 48 ipv6.route-metric 48 ipv4.may-fail no ipv6.may-fail no ipv6.addr-gen-mode eui64 connection.autoconnect no Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.4041] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27) Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.4042] audit: op="connection-add" uuid="94338756-3372-4447-bf85-a1e57729e56c" name="ovs-if-br-ex" pid=6449 uid=0 result="success" Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6449]: Connection 'ovs-if-br-ex' (94338756-3372-4447-bf85-a1e57729e56c) successfully added. Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + configure_driver_options eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + intf=eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '!' 
-f /sys/class/net/eno12399/device/uevent ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6454]: ++ cat /sys/class/net/eno12399/device/uevent Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6455]: ++ grep DRIVER Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6456]: ++ awk -F = '{print $2}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + driver=ice Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Driver name is' ice Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Driver name is ice Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ice = vmxnet3 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + update_nm_conn_files br-ex phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + bridge_name=br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + port_name=phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs_port=ovs-port-br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs_interface=ovs-if-br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + default_port_name=ovs-port-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + bridge_interface_name=ovs-if-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + MANAGED_NM_CONN_FILES=($(echo "${NM_CONN_PATH}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6457]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + shopt -s nullglob Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + MANAGED_NM_CONN_FILES+=(${NM_CONN_PATH}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${NM_CONN_PATH}/*${MANAGED_NM_CONN_SUFFIX}) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + shopt -u nullglob Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/ovnk/extra_bridge ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '!' 
-f /etc/ovnk/extra_bridge ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6458]: + nmcli connection show br-ex1 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6458]: + nmcli connection show ovs-if-phys1 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ovs-vsctl --timeout=30 --if-exists del-br br0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vsctl[6467]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + connections=(br-ex ovs-if-phys0) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/ovnk/extra_bridge ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6468]: ++ nmcli -g NAME c Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + IFS= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + read -r connection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ Wired Connection == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + IFS= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + read -r connection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ Wired Connection == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + IFS= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + read -r connection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + IFS= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + read -r connection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ ovs-if-br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + IFS= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + read -r connection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ ovs-if-phys0 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + IFS= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + read -r connection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ ovs-port-br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + IFS= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + read -r connection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ ovs-port-phys0 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + IFS= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + read -r connection Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + connections+=(ovs-if-br-ex) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' -f /etc/ovnk/extra_bridge ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + activate_nm_connections br-ex ovs-if-phys0 ovs-if-br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + connections=("$@") Jan 23 16:13:19 hub-master-0.workload.bos2.lab 
configure-ovs.sh[5125]: + local connections Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn in "${connections[@]}" Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6473]: ++ nmcli -g connection.slave-type connection show br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local slave_type= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' = team ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' = bond ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn in "${connections[@]}" Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6477]: ++ nmcli -g connection.slave-type connection show ovs-if-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local slave_type=ovs-port Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ovs-port = team ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ovs-port = bond ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn in "${connections[@]}" Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6481]: ++ nmcli -g connection.slave-type connection show ovs-if-br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local slave_type=ovs-port Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ovs-port = team ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ovs-port = bond ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + declare -A master_interfaces Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn in "${connections[@]}" Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6485]: ++ nmcli -g connection.slave-type connection show br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local slave_type= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local is_slave=false Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' = team ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' = bond ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local master_interface Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + false Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6489]: ++ nmcli -g GENERAL.STATE conn show br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local active_state= Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' == activated ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for i in {1..10} Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Attempt 1 to bring up connection br-ex' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Attempt 1 to bring up connection br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli conn up br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5381] agent-manager: agent[1d3d6e22f255af05,:1.190/nmcli-connect/0]: agent registered Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5389] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 
'external') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5392] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5394] device (br-ex): Activation: starting connection 'br-ex' (69b0fd5a-9982-4dfb-a0ff-9478dcfb5700) Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5394] audit: op="connection-activate" uuid="69b0fd5a-9982-4dfb-a0ff-9478dcfb5700" name="br-ex" pid=6493 uid=0 result="success" Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5395] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5397] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5399] device (br-ex): Activation: starting connection 'ovs-port-br-ex' (a6d2b9b5-7a03-4001-9502-ea4b59e4c55d) Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5400] device (eno12399): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5402] device (eno12399): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5404] device (eno12399): Activation: starting connection 'ovs-port-phys0' (bac6281c-e524-4e3e-8259-abe05ad061e7) Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5404] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5406] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5406] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5407] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5409] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5410] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5410] device (br-ex): Activation: connection 'ovs-port-br-ex' enslaved, continuing activation Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5412] device (eno12399): disconnecting for new activation request. 
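
The flurry of entries above is NetworkManager walking br-ex, ovs-port-br-ex, ovs-port-phys0 and eno12399 through its activation ladder (disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated). To follow the same transitions live on a node, either of these works:

  # Tail NetworkManager's own state-change lines as they are logged
  journalctl -fu NetworkManager | grep 'state change:'
  # Or subscribe to live NetworkManager events
  nmcli monitor
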
Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5412] device (eno12399): state change: activated -> deactivating (reason 'new-activation', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5413] manager: NetworkManager state is now CONNECTING Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5418] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5419] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5420] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5420] device (eno12399): Activation: connection 'ovs-port-phys0' enslaved, continuing activation Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5421] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5423] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5425] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5427] device (eno12399): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5531] dhcp4 (eno12399): canceled DHCP transaction Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5531] dhcp4 (eno12399): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5531] dhcp4 (eno12399): state changed no lease Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5532] dhcp6 (eno12399): canceled DHCP transaction Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5532] dhcp6 (eno12399): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.5533] dhcp6 (eno12399): state changed no lease Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6506]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6506]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6506]: + '[' -z ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6506]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6506]: Not a DHCP4 address. Ignoring. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6506]: + exit 0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6507]: + '[' -z ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6507]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6507]: Not a DHCP6 address. Ignoring. 
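
The nm-dispatcher entries ending above come from a hook that bails out when the event carries no DHCP lease. A minimal sketch of such a guard; DHCP4_IP_ADDRESS is one of the DHCP4_*/DHCP6_* variables NetworkManager exports to dispatcher scripts, and naming it here is an assumption, since the trace shows only an empty test and the echo:

  #!/bin/bash
  # Dispatcher hook sketch: nothing to do unless the event delivered a DHCP4 lease
  if [ -z "$DHCP4_IP_ADDRESS" ]; then
      echo "Not a DHCP4 address. Ignoring."
      exit 0
  fi
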
Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6507]: + exit 0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6030] device (eno12399): Activation: starting connection 'ovs-if-phys0' (e68e1ed9-b86e-4f80-9531-ca5523ce55b5) Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6055] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6058] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6061] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6064] device (eno12399): Activation: connection 'ovs-if-phys0' enslaved, continuing activation Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6065] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab kernel: device eno12399 entered promiscuous mode Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00073|bridge|INFO|bridge br-ex: added interface eno12399 on port 1 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00074|bridge|INFO|bridge br-ex: using datapath ID 0000b49691c8a630 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00075|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt" Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6510]: Error: Device '' not found. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6517]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6517]: + INTERFACE_NAME=br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6517]: + OPERATION=pre-up Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6517]: + '[' pre-up '!=' pre-up ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6521]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6520]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6180] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6517]: + INTERFACE_CONNECTION_UUID= Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6517]: + '[' '' == '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6517]: + exit 0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6302] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6303] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.6305] device (br-ex): Activation: successful, device activated. 
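
The pre-up hook traced above maps a device to the UUID of any non-OVS connection bound to it and exits quietly when there is none, which is why it stops here for br-ex, whose connections are all OVS types (the eno12399 run just below finds a UUID and continues down the port/bridge chain). The pipeline, as it appears in the trace:

  # Empty when the device only has ovs-* typed connections (as for br-ex here)
  INTERFACE_CONNECTION_UUID=$(nmcli -t -f device,type,uuid conn | awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}')
  [ -z "$INTERFACE_CONNECTION_UUID" ] && exit 0
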
Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + INTERFACE_NAME=eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + OPERATION=pre-up Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' pre-up '!=' pre-up ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6529]: ++ awk -F : '{if($1=="eno12399" && $2!~/^ovs*/) print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6528]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + INTERFACE_CONNECTION_UUID=e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' e68e1ed9-b86e-4f80-9531-ca5523ce55b5 == '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6536]: ++ nmcli -t -f connection.slave-type conn show e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6537]: ++ awk -F : '{print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' ovs-port '!=' ovs-port ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6542]: ++ nmcli -t -f connection.master conn show e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6543]: ++ awk -F : '{print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + PORT=bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 == '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6548]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6549]: ++ awk -F : '{if( ($1=="bac6281c-e524-4e3e-8259-abe05ad061e7" || $3=="bac6281c-e524-4e3e-8259-abe05ad061e7") && $2~/^ovs*/) print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + PORT_CONNECTION_UUID=bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 == '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6554]: ++ nmcli -t -f connection.slave-type conn show bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6555]: ++ awk -F : '{print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + PORT_OVS_SLAVE_TYPE=ovs-bridge Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' ovs-bridge '!=' ovs-bridge ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6560]: ++ nmcli -t -f connection.master conn show bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6561]: ++ awk -F : '{print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + BRIDGE_NAME=br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' br-ex '!=' br-ex ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + ovs-vsctl list interface eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + declare -A 
INTERFACES Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' -f /run/ofport_requests.br-ex ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents: Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + cat /run/ofport_requests.br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6566]: declare -A INTERFACES=([eno12399]="3" ) Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + source /run/ofport_requests.br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: ++ INTERFACES=([eno12399]="3") Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: ++ declare -A INTERFACES Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + '[' a ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + ovs-vsctl set Interface eno12399 ofport_request=3 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vsctl[6567]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface eno12399 ofport_request=3 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00076|bridge|INFO|bridge br-ex: deleted interface eno12399 on port 1 Jan 23 16:13:19 hub-master-0.workload.bos2.lab kernel: device eno12399 left promiscuous mode Jan 23 16:13:19 hub-master-0.workload.bos2.lab kernel: device eno12399 entered promiscuous mode Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00077|bridge|INFO|bridge br-ex: added interface eno12399 on port 3 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6525]: + declare -p INTERFACES Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.7333] device (eno12399): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.7335] device (eno12399): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.7337] device (eno12399): Activation: successful, device activated. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6570]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6570]: + INTERFACE_NAME=br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6570]: + OPERATION=pre-up Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6570]: + '[' pre-up '!=' pre-up ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6574]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6575]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.7361] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath. 
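
The /run/ofport_requests.br-ex dance above is how the hook keeps OpenFlow port numbers stable across re-plugs: it sources a serialized associative array, pins the interface to its recorded number via ofport_request (eno12399 -> 3 here, which is why OVS deletes the interface from port 1 and re-adds it on port 3), then re-serializes the array with declare -p. A sketch of the round trip; the exact key-existence test in the real hook is not fully visible in the trace:

  CONFIGURATION_FILE=/run/ofport_requests.br-ex
  declare -A INTERFACES
  # Reload prior assignments, e.g. declare -A INTERFACES=([eno12399]="3" )
  [ -f "$CONFIGURATION_FILE" ] && source "$CONFIGURATION_FILE"
  if [ -n "${INTERFACES[eno12399]}" ]; then
      ovs-vsctl set Interface eno12399 ofport_request="${INTERFACES[eno12399]}"
  fi
  # Persist the map for the next pre-up run
  declare -p INTERFACES > "$CONFIGURATION_FILE"
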
Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6570]: + INTERFACE_CONNECTION_UUID= Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6570]: + '[' '' == '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6570]: + exit 0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.7513] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.7514] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.7516] device (br-ex): Activation: successful, device activated. Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6493]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/12) Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + s=0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + break Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 0 -eq 0 ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Brought up connection br-ex successfully' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Brought up connection br-ex successfully Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + false Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli c mod br-ex connection.autoconnect yes Jan 23 16:13:19 hub-master-0.workload.bos2.lab chronyd[2922]: Can't synchronise: no selectable sources Jan 23 16:13:19 hub-master-0.workload.bos2.lab chronyd[2922]: Source 192.168.18.9 offline Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.7689] audit: op="connection-update" uuid="69b0fd5a-9982-4dfb-a0ff-9478dcfb5700" name="br-ex" args="connection.autoconnect,connection.timestamp" pid=6583 uid=0 result="success" Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6599]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6599]: + [[ Wired Connection == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6599]: + echo 'Refusing to modify default connection.' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6599]: Refusing to modify default connection. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6599]: + exit 0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn in "${connections[@]}" Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6600]: + '[' -z ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6600]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6600]: Not a DHCP6 address. Ignoring. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6600]: + exit 0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6601]: ++ nmcli -g connection.slave-type connection show ovs-if-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet... 
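
activate_nm_connections, whose first iteration ends above, retries nmcli conn up a bounded number of times per connection and only re-enables autoconnect after success (br-ex came up on attempt 1). A skeleton reconstructed from the trace; the pause between attempts is an assumption, since this run never reaches a retry:

  conn=br-ex
  s=1
  for i in {1..10}; do
      echo "Attempt $i to bring up connection $conn"
      if nmcli conn up "$conn"; then
          s=0
          break
      fi
      sleep 5  # assumption: the back-off length is not visible in this trace
  done
  if [ "$s" -eq 0 ]; then
      echo "Brought up connection $conn successfully"
      nmcli c mod "$conn" connection.autoconnect yes
  fi
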
Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local slave_type=ovs-port Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local is_slave=false Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ovs-port = team ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ovs-port = bond ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local master_interface Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + false Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[6615]: ++ nmcli -g GENERAL.STATE conn show ovs-if-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + INTERFACE_NAME=eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + OPERATION=pre-up Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' pre-up '!=' pre-up ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6621]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6622]: ++ awk -F : '{if($1=="eno12399" && $2!~/^ovs*/) print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local active_state=activating Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' activating == activated ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for i in {1..10} Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Attempt 1 to bring up connection ovs-if-phys0' Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Attempt 1 to bring up connection ovs-if-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli conn up ovs-if-phys0 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + INTERFACE_CONNECTION_UUID=e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' e68e1ed9-b86e-4f80-9531-ca5523ce55b5 == '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6631]: ++ nmcli -t -f connection.slave-type conn show e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6632]: ++ awk -F : '{print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8165] agent-manager: agent[570311f7ab4c6ec3,:1.205/nmcli-connect/0]: agent registered Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8178] device (eno12399): state change: ip-check -> deactivating (reason 'new-activation', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8180] device (eno12399): releasing ovs interface eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8180] device (eno12399): released from master device eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8183] device (eno12399): disconnecting for new activation request.
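
Before each activation the script short-circuits on connections that are already up: it reads GENERAL.STATE and only calls nmcli conn up when the state is not 'activated' (ovs-if-phys0 reports 'activating' above, so the activation proceeds). The check, isolated:

  active_state=$(nmcli -g GENERAL.STATE conn show ovs-if-phys0)
  if [ "$active_state" != "activated" ]; then
      nmcli conn up ovs-if-phys0
  fi
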
Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8183] audit: op="connection-activate" uuid="e68e1ed9-b86e-4f80-9531-ca5523ce55b5" name="ovs-if-phys0" pid=6626 uid=0 result="success" Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00078|bridge|INFO|bridge br-ex: deleted interface eno12399 on port 3 Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8196] device (eno12399): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8204] device (eno12399): Activation: starting connection 'ovs-if-phys0' (e68e1ed9-b86e-4f80-9531-ca5523ce55b5) Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8206] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8207] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8210] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8212] device (eno12399): Activation: connection 'ovs-if-phys0' enslaved, continuing activation Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.8214] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' ovs-port '!=' ovs-port ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6638]: ++ nmcli -t -f connection.master conn show e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6639]: ++ awk -F : '{print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + PORT=bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 == '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6644]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6645]: ++ awk -F : '{if( ($1=="bac6281c-e524-4e3e-8259-abe05ad061e7" || $3=="bac6281c-e524-4e3e-8259-abe05ad061e7") && $2~/^ovs*/) print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + PORT_CONNECTION_UUID=bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 == '' ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6692]: ++ nmcli -t -f connection.slave-type conn show bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6693]: ++ awk -F : '{print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + PORT_OVS_SLAVE_TYPE=ovs-bridge Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' ovs-bridge '!=' ovs-bridge ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6729]: ++ nmcli -t -f connection.master conn show bac6281c-e524-4e3e-8259-abe05ad061e7 Jan 23 16:13:19 hub-master-0.workload.bos2.lab 
nm-dispatcher[6730]: ++ awk -F : '{print $NF}' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + BRIDGE_NAME=br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' br-ex '!=' br-ex ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + ovs-vsctl list interface eno12399 Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + declare -A INTERFACES Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' -f /run/ofport_requests.br-ex ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents: Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + cat /run/ofport_requests.br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6782]: declare -A INTERFACES=([eno12399]="3" ) Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + source /run/ofport_requests.br-ex Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: ++ INTERFACES=([eno12399]="3") Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: ++ declare -A INTERFACES Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + '[' a ']' Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + ovs-vsctl set Interface eno12399 ofport_request=3 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vsctl[6784]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface eno12399 ofport_request=3 Jan 23 16:13:19 hub-master-0.workload.bos2.lab kernel: device eno12399 left promiscuous mode Jan 23 16:13:19 hub-master-0.workload.bos2.lab kernel: device eno12399 entered promiscuous mode Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00079|bridge|INFO|bridge br-ex: added interface eno12399 on port 3 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00080|bridge|INFO|bridge br-ex: using datapath ID 0000b49691c8a630 Jan 23 16:13:19 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00081|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt" Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.9092] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6619]: + declare -p INTERFACES Jan 23 16:13:19 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490399.9129] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6824]: NM resolv-prepender triggered by br-ex up. Jan 23 16:13:19 hub-master-0.workload.bos2.lab nm-dispatcher[6825]: nameserver 2600:52:7:18::9 Jan 23 16:13:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. 
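
The resolv-prepender hook above fires once br-ex is up and pushes the cluster's nameserver (2600:52:7:18::9 here) to the top of resolv.conf so it takes precedence over any later entries. A rough sketch of the idea only; the actual OpenShift hook renders the file differently, so treat the path and details here as assumptions:

  NS="nameserver 2600:52:7:18::9"
  if [ "$(head -n1 /etc/resolv.conf)" != "$NS" ]; then
      current=$(cat /etc/resolv.conf)
      { echo "$NS"; echo "$current"; } > /etc/resolv.conf
  fi
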
Jan 23 16:13:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. Jan 23 16:13:20 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope. Jan 23 16:13:20 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910. Jan 23 16:13:20 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=info msg="Parsed Virtual IP 192.168.18.7" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=info msg="Parsed Virtual IP 192.168.18.8" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="retrieved Address map map[0xc000338ea0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw:
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="retrieved Address map map[0xc0003750e0:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:20Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:20 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state.
Jan 23 16:13:20 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet.
-- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit console-login-helper-messages-issuegen.service has finished starting up.
--
-- The start-up result is done.
Jan 23 16:13:20 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 11ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources.
[Jan 23 16:13:21 through 16:13:37: nm-dispatcher[6859] repeats the identical address/route scan once per second, with the same "Ignoring filtered address", "retrieved Address map" (differing only in the map pointer), "Checking whether address 127.0.0.1/8 lo contains VIP", "Ignoring filtered route", and "Retrieved route map" debug entries as at 16:13:20; each pass ends with:]
Jan 23 16:13:21 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:21Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:22 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:22Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:23 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:23Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:24 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:24Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:25 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:25Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:26 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:26Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:27 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:27Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:28 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:28Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:29 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:29Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:30 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:30Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:31 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:31Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:32 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:32Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:33 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:33Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:34 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:34Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:35 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:35Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:13:36 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:36Z" level=error msg="Failed to find a suitable node IP"
Flags: [] Table: 254}]]" Jan 23 16:13:37 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:37Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1302] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1306] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1307] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1572] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1573] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1580] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1583] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1583] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1584] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1587] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490418.1591] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="retrieved Address map map[0xc0002c4240:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:38 
hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Retrieved route map map[]" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="retrieved Address map map[0xc0002c4ea0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=debug msg="Retrieved route map map[]" Jan 23 16:13:38 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:38Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="retrieved Address map map[0xc000375680:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Retrieved route map map[]" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" 
level=debug msg="retrieved Address map map[0xc0005ec6c0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=debug msg="Retrieved route map map[]" Jan 23 16:13:39 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:39Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:40 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490420.0574] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:13:40 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490420.0576] policy: set 'Wired Connection' (eno12409) as default for IPv6 routing and DNS Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="retrieved Address map map[0xc0002c5b00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:40 
hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="retrieved Address map map[0xc00037aa20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:40 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:40Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="retrieved Address map map[0xc0005ed440:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 
16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="retrieved Address map map[0xc000466900:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:41 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:41Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="retrieved Address map map[0xc000467680:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 
23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="retrieved Address map map[0xc0004f8480:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:42 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:42Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="retrieved Address map map[0xc00037b7a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" 
Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="retrieved Address map map[0xc0003fc5a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:43 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:43Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="retrieved Address map map[0xc0004f9200:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered address 
fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="retrieved Address map map[0xc0005f2000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:44 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:44Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="retrieved Address map map[0xc0005f2d80:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered 
address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="retrieved Address map map[0xc0005f3b00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:45 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:45Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="retrieved Address map map[0xc0008b2000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring 
filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="retrieved Address map map[0xc0008b2ea0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:46 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:46Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="retrieved Address map map[0xc0001e8360:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug 
msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="retrieved Address map map[0xc0002a3e60:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:47 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:47Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="retrieved Address map map[0xc0008b3c20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" 
level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="retrieved Address map map[0xc00036c7e0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:48 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:48Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="retrieved Address map map[0xc00032f560:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: 
time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="retrieved Address map map[0xc0002c4c60:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6859]: time="2023-01-23T16:13:49Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6822]: NM resolv-prepender: Timeout occurred Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[5168]: req:29 'up' [br-ex], "/etc/NetworkManager/dispatcher.d/30-resolv-prepender": complete: failed with Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1. Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6939]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6939]: + [[ ovs-port-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6939]: + '[' -z ']' Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6939]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6939]: Not a DHCP4 address. Ignoring. Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6939]: + exit 0 Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6945]: + '[' -z ']' Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6945]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6945]: Not a DHCP6 address. Ignoring. Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6945]: + exit 0 Jan 23 16:13:49 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490429.9605] dispatcher: (47) /etc/NetworkManager/dispatcher.d/30-resolv-prepender failed (failed): Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1. Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6968]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6968]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6968]: + '[' -z ']' Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6968]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6968]: Not a DHCP4 address. Ignoring. 
Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6968]: + exit 0 Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6969]: + '[' -z ']' Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6969]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6969]: Not a DHCP6 address. Ignoring. Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6969]: + exit 0 Jan 23 16:13:49 hub-master-0.workload.bos2.lab systemd[1]: libpod-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope has successfully entered the 'dead' state. Jan 23 16:13:49 hub-master-0.workload.bos2.lab systemd[1]: libpod-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope: Consumed 99ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope completed and consumed the indicated resources. Jan 23 16:13:49 hub-master-0.workload.bos2.lab nm-dispatcher[6972]: Error: Device '' not found. Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7013]: NM resolv-prepender triggered by eno12399 up. Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7014]: nameserver 2600:52:7:18::9 Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-4ad712800c86c0f17a3fd9d3af42055f621decea4437c5d92d538896da29c09e-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-4ad712800c86c0f17a3fd9d3af42055f621decea4437c5d92d538896da29c09e-merged.mount has successfully entered the 'dead' state. Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount completed and consumed the indicated resources. Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope has successfully entered the 'dead' state. 
Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope: Consumed 108ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-568240c903a02f7288c70c1036d7e6684ffa662c30b12e83b833269119f9a910.scope completed and consumed the indicated resources. Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope. -- Subject: Unit libpod-conmon-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784. -- Subject: Unit libpod-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:13:50 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=info msg="Parsed Virtual IP 192.168.18.7" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=info msg="Parsed Virtual IP 192.168.18.8" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="retrieved Address map map[0xc0001b6a20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered route 
{Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="retrieved Address map map[0xc0001b77a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:50 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:50Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="retrieved Address map map[0xc00063a000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered 
route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="retrieved Address map map[0xc00063ad80:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:51 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:51Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="retrieved Address map map[0xc0003da5a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring 
filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="retrieved Address map map[0xc0003db320:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:52 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:52Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="retrieved Address map map[0xc00063bb00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug 
msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="retrieved Address map map[0xc000666900:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:53 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:53Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="retrieved Address map map[0xc0007be120:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" 
level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="retrieved Address map map[0xc0007beea0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:54 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:54Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="retrieved Address map map[0xc0007bfc20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: 
time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="retrieved Address map map[0xc0005125a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:55 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:55Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="retrieved Address map map[0xc000882480:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:56 hub-master-0.workload.bos2.lab 
nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="retrieved Address map map[0xc000883200:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:56 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:56Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="retrieved Address map map[0xc000376000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:57 
hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="retrieved Address map map[0xc000377320:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:57 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:57Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="retrieved Address map map[0xc000513320:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 
16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="retrieved Address map map[0xc0002a38c0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:58 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:58Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="retrieved Address map map[0xc0002c4000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 
23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="retrieved Address map map[0xc0001e86c0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:13:59 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:13:59Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="retrieved Address map map[0xc000336a20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" 
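
The cycle above repeats once per second from 16:13:50 onward: the link-local fe80:: addresses and routes are filtered out, the only entries left in the retrieved address map are the loopback pair 127.0.0.1/8 and ::1/128, and neither loopback subnet contains the parsed VIPs 192.168.18.7 or 192.168.18.8, so every pass ends in level=error "Failed to find a suitable node IP". A minimal Go sketch of the subnet-containment test implied by the "Checking whether address ... contains VIP ..." lines follows; it is illustrative only, not the actual nm-dispatcher source, and the helper name addrContainsVIP is hypothetical.

package main

import (
	"fmt"
	"net"
)

// addrContainsVIP reports whether the VIP falls inside the subnet of an
// interface address given in CIDR form, mirroring the logged check
// "Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7".
// (Hypothetical helper; the real implementation is not shown in this log.)
func addrContainsVIP(addrCIDR, vip string) (bool, error) {
	_, subnet, err := net.ParseCIDR(addrCIDR) // 127.0.0.1/8 -> subnet 127.0.0.0/8
	if err != nil {
		return false, err
	}
	ip := net.ParseIP(vip)
	if ip == nil {
		return false, fmt.Errorf("invalid VIP %q", vip)
	}
	return subnet.Contains(ip), nil
}

func main() {
	// The only non-filtered address in the log is loopback; the link-local
	// fe80:: addresses are skipped ("Ignoring filtered address ...").
	for _, vip := range []string{"192.168.18.7", "192.168.18.8"} {
		ok, err := addrContainsVIP("127.0.0.1/8", vip)
		if err != nil {
			panic(err)
		}
		fmt.Printf("127.0.0.1/8 contains %s: %v\n", vip, ok)
	}
}

Run as-is, this prints false for both VIPs, which matches why the loop above keeps logging the same error each second: no candidate address on the node sits in a subnet that contains either VIP.
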
Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="retrieved Address map map[0xc000371d40:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:00 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:00Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="retrieved Address map map[0xc0002c4c60:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 
254}" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="retrieved Address map map[0xc0007bea20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:01 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:01Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="retrieved Address map map[0xc0001b6ea0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] 
Table: 254}" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="retrieved Address map map[0xc0001b7c20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:02 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:02Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="retrieved Address map map[0xc0007bf7a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: 
[] Table: 254}" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="retrieved Address map map[0xc000408900:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:03 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:03Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="retrieved Address map map[0xc000374a20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: 
Flags: [] Table: 254}" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="retrieved Address map map[0xc000374120:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:04 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:04Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="retrieved Address map map[0xc000376000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: 
Gw: Flags: [] Table: 254}" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="retrieved Address map map[0xc000377320:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:05 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:05Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="retrieved Address map map[0xc000375b00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 
Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="retrieved Address map map[0xc0002c3e60:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:14:06 hub-master-0.workload.bos2.lab nm-dispatcher[7052]: time="2023-01-23T16:14:06Z" level=error msg="Failed to find a suitable node IP"
[... this nm-dispatcher[7052] debug cycle repeats once per second from 16:14:07 through 16:14:19; each pass also logs "Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" and "... contains VIP 192.168.18.8" against the loopback-only address map, and ends with level=error msg="Failed to find a suitable node IP" ...]
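The time=/level=/msg= fields and the map[0xc...] pointer keys above are logrus-style output from a Go helper invoked by the 30-resolv-prepender dispatcher script; the "Checking whether address ... contains VIP" lines trace a plain subnet-containment test against each local interface address. A minimal Go sketch of that test, assuming only the loopback addresses shown in the address map (findVIPHost and all literals below are illustrative stand-ins, not the actual helper's code):

    package main

    import (
    	"fmt"
    	"net"
    )

    // findVIPHost returns the first local address whose subnet contains vip.
    // This mirrors the per-address "Checking whether address ... contains VIP"
    // debug lines in the log above (illustrative only).
    func findVIPHost(addrs []string, vip net.IP) (string, bool) {
    	for _, a := range addrs {
    		_, subnet, err := net.ParseCIDR(a)
    		if err != nil {
    			continue // skip anything that is not a CIDR address
    		}
    		if subnet.Contains(vip) {
    			return a, true
    		}
    	}
    	return "", false
    }

    func main() {
    	// Assumption from the log: only loopback addresses are visible.
    	addrs := []string{"127.0.0.1/8", "::1/128"}
    	for _, vip := range []string{"192.168.18.7", "192.168.18.8"} {
    		if a, ok := findVIPHost(addrs, net.ParseIP(vip)); ok {
    			fmt.Printf("VIP %s is covered by %s\n", vip, a)
    		} else {
    			fmt.Printf("no suitable node IP for VIP %s\n", vip)
    		}
    	}
    }

With only 127.0.0.1/8 and ::1/128 visible, neither VIP can fall inside any subnet, which is why every pass above ends in the level=error line.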
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7011]: NM resolv-prepender: Timeout occurred
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[5168]: req:31 'up' [eno12399], "/etc/NetworkManager/dispatcher.d/30-resolv-prepender": complete: failed with Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7118]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7118]: + [[ ovs-port-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7118]: + '[' -z ']'
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7118]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7118]: Not a DHCP4 address. Ignoring.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7118]: + exit 0
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7124]: + '[' -z ']'
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7124]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7124]: Not a DHCP6 address. Ignoring.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7124]: + exit 0
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet... -- Subject: Unit console-login-helper-messages-issuegen.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has begun starting up.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490460.0533] dispatcher: (49) /etc/NetworkManager/dispatcher.d/30-resolv-prepender failed (failed): Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope has successfully entered the 'dead' state.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope: Consumed 94ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope completed and consumed the indicated resources.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7175]: NM resolv-prepender triggered by br-ex up.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7176]: nameserver 2600:52:7:18::9
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784-userdata-shm.mount completed and consumed the indicated resources.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-9048166dc4a3afd27a72800348fe41a0a6011f0494986f672a24bfd871d160c0-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-9048166dc4a3afd27a72800348fe41a0a6011f0494986f672a24bfd871d160c0-merged.mount has successfully entered the 'dead' state.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-9048166dc4a3afd27a72800348fe41a0a6011f0494986f672a24bfd871d160c0-merged.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-9048166dc4a3afd27a72800348fe41a0a6011f0494986f672a24bfd871d160c0-merged.mount completed and consumed the indicated resources.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount completed and consumed the indicated resources.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope has successfully entered the 'dead' state.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope: Consumed 111ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-c80be2b4b97b22b70700fb65f49d25d36a2ce0684cc05cb482fde6a26d1fe784.scope completed and consumed the indicated resources.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope. -- Subject: Unit libpod-conmon-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope has finished starting up. -- -- The start-up result is done.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0. -- Subject: Unit libpod-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope has finished starting up. -- -- The start-up result is done.
Jan 23 16:14:20 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="retrieved Address map map[0xc0001df680:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="retrieved Address map map[0xc0003d8480:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:14:20 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:20Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:21 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state.
Jan 23 16:14:21 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet. -- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has finished starting up. -- -- The start-up result is done.
Jan 23 16:14:21 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 12ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources.
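The timestamps show each dispatcher process re-running this scan on a one-second cadence until an overall deadline fires (the "NM resolv-prepender: Timeout occurred" line at 16:14:20, after which the script exits with status 1 and a fresh process starts over). A sketch of that retry shape, assuming a simple ticker-plus-deadline loop (waitForNodeIP, lookupNodeIP, and the 3-second deadline are hypothetical stand-ins, not the actual script or helper logic):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var errNoNodeIP = errors.New("failed to find a suitable node IP")

    // lookupNodeIP stands in for the address/route scan; it always fails
    // here, matching a host where only loopback addresses are present.
    func lookupNodeIP() (string, error) { return "", errNoNodeIP }

    // waitForNodeIP retries once per second until the deadline expires,
    // mirroring the one-second spacing of the log entries above.
    func waitForNodeIP(deadline time.Duration) (string, error) {
    	timeout := time.After(deadline)
    	tick := time.NewTicker(time.Second)
    	defer tick.Stop()
    	for {
    		if ip, err := lookupNodeIP(); err == nil {
    			return ip, nil
    		}
    		select {
    		case <-tick.C: // poll again, one second apart
    		case <-timeout:
    			return "", fmt.Errorf("timeout occurred: %w", errNoNodeIP)
    		}
    	}
    }

    func main() {
    	if _, err := waitForNodeIP(3 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }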
16:14:21 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:21Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:21 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:21Z" level=debug msg="retrieved Address map map[0xc00025ed80:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:21 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:21Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:21 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:21Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:21 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:21Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:21 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:21Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:21 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:21Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:21 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:21Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="retrieved Address map map[0xc0003d9200:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 
23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="retrieved Address map map[0xc0006c4000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:22 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:22Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="retrieved Address map map[0xc00025fb00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" 
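[Annotation] The dispatcher records above repeat once per second: each pass drops the two link-local fe80::/64 addresses as "filtered", is left with only loopback (127.0.0.1/8 and ::1/128), finds that loopback contains neither VIP 192.168.18.7 nor 192.168.18.8, and ends with level=error "Failed to find a suitable node IP". To gauge how long the retry loop ran, a minimal Go sketch that counts the failure records in a saved journal dump is shown below; the program and its usage are illustrative assumptions, not part of the log or of runtimecfg itself.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Reads a journal dump on stdin and counts the per-second retry
    // failures visible above. Illustrative usage:
    //   go run countfailures.go < node.log
    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
        n := 0
        for sc.Scan() {
            if strings.Contains(sc.Text(), "Failed to find a suitable node IP") {
                n++
            }
        }
        fmt.Println("failed attempts:", n)
    }
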
Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="retrieved Address map map[0xc000706900:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:23 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:23Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="retrieved Address map map[0xc0006c4d80:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered address 
fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="retrieved Address map map[0xc0006c5b00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:24 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:24Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="retrieved Address map map[0xc000707680:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered 
address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="retrieved Address map map[0xc00073a480:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:25 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:25Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="retrieved Address map map[0xc00073a000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring 
filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="retrieved Address map map[0xc00073bb00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:26 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:26Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="retrieved Address map map[0xc0002b0000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug 
msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="retrieved Address map map[0xc0002c4480:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:27 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:27Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="retrieved Address map map[0xc0002a37a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" 
level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="retrieved Address map map[0xc000336480:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:28 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:28Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="retrieved Address map map[0xc000369320:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: 
time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="retrieved Address map map[0xc000374d80:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:29 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:29Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="retrieved Address map map[0xc0002c5200:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:30 hub-master-0.workload.bos2.lab 
nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="retrieved Address map map[0xc0001deb40:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:30 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:30Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="retrieved Address map map[0xc00037a000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:31 
hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="retrieved Address map map[0xc00037ad80:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:31 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:31Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="retrieved Address map map[0xc0001dfc20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 
16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="retrieved Address map map[0xc0007e8a20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:32 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:32Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="retrieved Address map map[0xc00037bb00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 
23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="retrieved Address map map[0xc0005a0900:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:33 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:33Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="retrieved Address map map[0xc0007e97a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" 
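[Annotation] The "Ignoring filtered address/route" and "Checking whether address ... contains VIP" records correspond to two tests: exclusion of link-local and loopback addresses, and subnet containment of the VIPs. The sketch below reproduces both checks with Go's standard net package; it is an assumed illustration of the behavior visible in the log, not the actual runtimecfg implementation.

    package main

    import (
        "fmt"
        "net"
    )

    // suitable mirrors the filtering visible in the log: link-local
    // (fe80::/10) and loopback addresses are skipped, so a host whose only
    // remaining addresses are 127.0.0.1/8 and ::1/128 can never yield a
    // node IP.
    func suitable(cidr string) bool {
        ip, _, err := net.ParseCIDR(cidr)
        if err != nil {
            return false
        }
        return !ip.IsLinkLocalUnicast() && !ip.IsLoopback()
    }

    func main() {
        for _, a := range []string{
            "fe80::b696:91ff:fec8:a631/64", // "Ignoring filtered address" in the log
            "127.0.0.1/8",                  // loopback: retrieved but never suitable
            "::1/128",
        } {
            fmt.Printf("%-30s suitable=%t\n", a, suitable(a))
        }

        // The "Checking whether address 127.0.0.1/8 lo contains VIP ..."
        // lines are a subnet-containment test; loopback contains neither
        // VIP, so the loop retries and logs the error again.
        _, loNet, _ := net.ParseCIDR("127.0.0.1/8")
        for _, vip := range []string{"192.168.18.7", "192.168.18.8"} {
            fmt.Printf("127.0.0.1/8 contains VIP %s: %t\n", vip, loNet.Contains(net.ParseIP(vip)))
        }
    }
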
Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="retrieved Address map map[0xc0004d25a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:34 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:34Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="retrieved Address map map[0xc0005a1680:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 
254}]]" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="retrieved Address map map[0xc0006da000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:35 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:35Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="retrieved Address map map[0xc0001e8360:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] 
Table: 254}]]" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="retrieved Address map map[0xc0002c3c20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:36 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:36Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="retrieved Address map map[0xc0006daea0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: 
[] Table: 254}]]" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="retrieved Address map map[0xc0006dbc20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:37 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:37Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="retrieved Address map map[0xc0003387e0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: 
Flags: [] Table: 254}]]" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="retrieved Address map map[0xc0002c4000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:38 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:38Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="retrieved Address map map[0xc0003377a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: 
Gw: Flags: [] Table: 254}]]" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="retrieved Address map map[0xc000375320:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:39 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:39Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="retrieved Address map map[0xc0001de5a0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 
Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="retrieved Address map map[0xc0002c4ea0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:40 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:40Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="retrieved Address map map[0xc0005a07e0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 
2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="retrieved Address map map[0xc0005a1560:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:41 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:41Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="retrieved Address map map[0xc0001df200:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 
Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="retrieved Address map map[0xc0006ae360:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:42 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:42Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="retrieved Address map map[0xc000466480:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Retrieved route map 
map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="retrieved Address map map[0xc000467200:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:43 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:43Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="retrieved Address map map[0xc0006af0e0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Retrieved route 
map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="retrieved Address map map[0xc0006afe60:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:44 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:44Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="retrieved Address map map[0xc00078ec60:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Retrieved 
route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="retrieved Address map map[0xc00078f9e0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:45 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:45Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="retrieved Address map map[0xc000712000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug 
msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="retrieved Address map map[0xc000712d80:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:46 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:46Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="retrieved Address map map[0xc000713b00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" 
level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="retrieved Address map map[0xc000754900:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:47 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:47Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="retrieved Address map map[0xc0001e8360:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: 
time="2023-01-23T16:14:48Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="retrieved Address map map[0xc0002c3c20:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:48 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:48Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="retrieved Address map map[0xc0003387e0:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:49 hub-master-0.workload.bos2.lab 
nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="retrieved Address map map[0xc000336000:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:14:49 hub-master-0.workload.bos2.lab nm-dispatcher[7228]: time="2023-01-23T16:14:49Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7173]: NM resolv-prepender: Timeout occurred
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[5168]: req:32 'up' [br-ex], "/etc/NetworkManager/dispatcher.d/30-resolv-prepender": complete: failed with Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7299]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7299]: + [[ br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7299]: + '[' -z ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7299]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7299]: Not a DHCP4 address. Ignoring.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7299]: + exit 0
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7305]: + '[' -z ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7305]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7305]: Not a DHCP6 address. Ignoring.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7305]: + exit 0
Jan 23 16:14:50 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490490.1040] dispatcher: (50) /etc/NetworkManager/dispatcher.d/30-resolv-prepender failed (failed): Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7314]: time="2023-01-23T16:14:50Z" level=error msg="container not running"
Jan 23 16:14:50 hub-master-0.workload.bos2.lab systemd[1]: libpod-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope has successfully entered the 'dead' state.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab systemd[1]: libpod-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope: Consumed 94ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit libpod-f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0.scope completed and consumed the indicated resources.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7337]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7337]: + [[ ovs-if-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7337]: + '[' -z ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7337]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7337]: Not a DHCP4 address. Ignoring.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7337]: + exit 0
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7343]: + '[' -z ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7343]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7343]: Not a DHCP6 address. Ignoring.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7343]: + exit 0
Jan 23 16:14:50 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet...
-- Subject: Unit console-login-helper-messages-issuegen.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit console-login-helper-messages-issuegen.service has begun starting up.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab configure-ovs.sh[6626]: Error: Timeout expired (90 seconds)
Jan 23 16:14:50 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + s=3
Jan 23 16:14:50 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + sleep 5
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + INTERFACE_NAME=eno12399
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + OPERATION=pre-up
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' pre-up '!=' pre-up ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7367]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7368]: ++ awk -F : '{if($1=="eno12399" && $2!~/^ovs*/) print $NF}'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + INTERFACE_CONNECTION_UUID=e68e1ed9-b86e-4f80-9531-ca5523ce55b5
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' e68e1ed9-b86e-4f80-9531-ca5523ce55b5 == '' ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7373]: ++ nmcli -t -f connection.slave-type conn show e68e1ed9-b86e-4f80-9531-ca5523ce55b5
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7374]: ++ awk -F : '{print $NF}'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' ovs-port '!=' ovs-port ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7379]: ++ nmcli -t -f connection.master conn show e68e1ed9-b86e-4f80-9531-ca5523ce55b5
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7380]: ++ awk -F : '{print $NF}'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + PORT=bac6281c-e524-4e3e-8259-abe05ad061e7
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 == '' ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7385]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7386]: ++ awk -F : '{if( ($1=="bac6281c-e524-4e3e-8259-abe05ad061e7" || $3=="bac6281c-e524-4e3e-8259-abe05ad061e7") && $2~/^ovs*/) print $NF}'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + PORT_CONNECTION_UUID=bac6281c-e524-4e3e-8259-abe05ad061e7
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 == '' ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7391]: ++ nmcli -t -f connection.slave-type conn show bac6281c-e524-4e3e-8259-abe05ad061e7
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7392]: ++ awk -F : '{print $NF}'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + PORT_OVS_SLAVE_TYPE=ovs-bridge
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' ovs-bridge '!=' ovs-bridge ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7397]: ++ nmcli -t -f connection.master conn show bac6281c-e524-4e3e-8259-abe05ad061e7
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7398]: ++ awk -F : '{print $NF}'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + BRIDGE_NAME=br-ex
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' br-ex '!=' br-ex ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + ovs-vsctl list interface eno12399
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + declare -A INTERFACES
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' -f /run/ofport_requests.br-ex ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents:
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + cat /run/ofport_requests.br-ex
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7403]: declare -A INTERFACES=([eno12399]="3" )
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + source /run/ofport_requests.br-ex
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: ++ INTERFACES=([eno12399]="3")
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: ++ declare -A INTERFACES
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + '[' a ']'
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + ovs-vsctl set Interface eno12399 ofport_request=3
Jan 23 16:14:50 hub-master-0.workload.bos2.lab ovs-vsctl[7404]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface eno12399 ofport_request=3
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7365]: + declare -p INTERFACES
Jan 23 16:14:50 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490490.2477] device (eno12399): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jan 23 16:14:50 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490490.2478] device (eno12399): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jan 23 16:14:50 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490490.2482] device (eno12399): Activation: successful, device activated.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7424]: NM resolv-prepender triggered by eno12399 up.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7425]: nameserver 2600:52:7:18::9
Jan 23 16:14:50 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope.
-- Subject: Unit libpod-conmon-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-conmon-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:14:50 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.
-- Subject: Unit libpod-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit libpod-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope has finished starting up.
--
-- The start-up result is done.
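The xtrace from nm-dispatcher[7365] above is the OVN pre-up hook keeping OpenFlow port numbers stable: it stores each interface's ofport_request in a per-bridge file (/run/ofport_requests.br-ex) as a bash associative array, re-sources that file on every activation, and re-applies the stored value (here, port 3 for eno12399) with ovs-vsctl. A minimal sketch of that reuse-or-record logic, reconstructed from the trace; the first-sighting branch and the final write-back are inferred rather than shown verbatim in this log:

    #!/bin/bash
    # Sketch of the ofport_request bookkeeping traced above.
    # INTERFACE_NAME and BRIDGE_NAME come from the dispatcher environment.
    INTERFACE_NAME=eno12399
    BRIDGE_NAME=br-ex
    CONFIGURATION_FILE="/run/ofport_requests.${BRIDGE_NAME}"
    declare -A INTERFACES
    if [ -f "$CONFIGURATION_FILE" ]; then
        echo "Sourcing configuration file '${CONFIGURATION_FILE}' with contents:"
        cat "$CONFIGURATION_FILE"
        # Restores e.g. INTERFACES=([eno12399]="3"), as in the log.
        source "$CONFIGURATION_FILE"
    fi
    # "${...+a}" expands to "a" when the key exists, matching the
    # "+ '[' a ']'" line in the trace.
    if [ "${INTERFACES[$INTERFACE_NAME]+a}" ]; then
        # Known interface: re-request its previous OpenFlow port so the
        # number stays stable across link flaps and reconnections.
        ovs-vsctl set Interface "$INTERFACE_NAME" ofport_request="${INTERFACES[$INTERFACE_NAME]}"
    else
        # Inferred branch: first sighting, record whatever ofport OVS chose.
        INTERFACES[$INTERFACE_NAME]=$(ovs-vsctl get Interface "$INTERFACE_NAME" ofport)
    fi
    # Persist the map in re-sourceable 'declare -p' form, as seen in the log.
    declare -p INTERFACES > "$CONFIGURATION_FILE"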
Jan 23 16:14:50 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=info msg="Parsed Virtual IP 192.168.18.7" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=info msg="Parsed Virtual IP 192.168.18.8" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="retrieved Address map map[0xc00018bb00:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="retrieved Address map map[0xc0003dc900:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Ignoring filtered route 
{Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]" Jan 23 16:14:50 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:50Z" level=error msg="Failed to find a suitable node IP" Jan 23 16:14:51 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state. Jan 23 16:14:51 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet. -- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has finished starting up. -- -- The start-up result is done. Jan 23 16:14:51 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 11ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources. Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="retrieved Address map map[0xc000246000:[127.0.0.1/8 lo ::1/128]]" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64" Jan 23 
16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="retrieved Address map map[0xc000246d80:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:14:51 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:51Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="retrieved Address map map[0xc0003dd680:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="retrieved Address map map[0xc000490480:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:14:52 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:52Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="retrieved Address map map[0xc000491200:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
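The repeating debug cycle above is the node-IP discovery probe that nm-dispatcher drives (the message format matches the Go-based openshift/baremetal-runtimecfg helper): it walks the host's addresses, discards link-local candidates, and checks whether any remaining subnet contains one of the configured VIPs (192.168.18.7 and 192.168.18.8 here). Because only lo's addresses survive the filter, every pass ends in "Failed to find a suitable node IP". Below is a minimal Go sketch of that filter-and-contains step; the function name and structure are illustrative assumptions, not the actual runtimecfg code, which also consults the route maps seen in the log:

```go
package main

import (
	"fmt"
	"net"
)

// suitableNodeIP is a hypothetical helper mirroring the logged behaviour:
// drop link-local addresses, then keep an address only if its subnet
// contains one of the VIPs.
func suitableNodeIP(addrs []string, vips []net.IP) (net.IP, error) {
	for _, a := range addrs {
		ip, ipnet, err := net.ParseCIDR(a)
		if err != nil {
			continue
		}
		if ip.IsLinkLocalUnicast() {
			fmt.Printf("Ignoring filtered address %s\n", a)
			continue
		}
		for _, vip := range vips {
			fmt.Printf("Checking whether address %s contains VIP %s\n", a, vip)
			if ipnet.Contains(vip) {
				return ip, nil
			}
		}
	}
	return nil, fmt.Errorf("failed to find a suitable node IP")
}

func main() {
	// With only loopback and link-local addresses present, as in the log,
	// every pass fails: 127.0.0.0/8 does not contain 192.168.18.x.
	addrs := []string{"127.0.0.1/8", "fe80::b696:91ff:fec8:a631/64"}
	vips := []net.IP{net.ParseIP("192.168.18.7"), net.ParseIP("192.168.18.8")}
	if _, err := suitableNodeIP(addrs, vips); err != nil {
		fmt.Println(err)
	}
}
```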
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="retrieved Address map map[0xc0007d4000:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:14:53 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:53Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="retrieved Address map map[0xc0007d4d80:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="retrieved Address map map[0xc0007d5b00:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:14:54 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:54Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for i in {1..10}
Jan 23 16:14:55 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Attempt 2 to bring up connection ovs-if-phys0'
Jan 23 16:14:55 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Attempt 2 to bring up connection ovs-if-phys0
Jan 23 16:14:55 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli conn up ovs-if-phys0
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1516] agent-manager: agent[a8f13ddde9917007,:1.242/nmcli-connect/0]: agent registered
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1531] device (eno12399): state change: activated -> deactivating (reason 'new-activation', sys-iface-state: 'managed')
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1533] device (eno12399): releasing ovs interface eno12399
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1533] device (eno12399): released from master device eno12399
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1536] device (eno12399): disconnecting for new activation request.
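The configure-ovs.sh trace above ("+ for i in {1..10}", "+ nmcli conn up ovs-if-phys0") is the script's bounded retry loop for activating the OVS physical connection; the attempt counter and limit of 10 are taken directly from the trace. The script itself is bash, but for consistency with the sketch earlier, here is a Go rendering of the same retry pattern using os/exec; the back-off delay is an assumption, not something the log shows:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// bringUp retries `nmcli conn up <name>` a fixed number of times,
// mirroring the "Attempt N to bring up connection ..." loop that
// configure-ovs.sh logs. Illustrative only.
func bringUp(conn string, attempts int) error {
	for i := 1; i <= attempts; i++ {
		fmt.Printf("Attempt %d to bring up connection %s\n", i, conn)
		out, err := exec.Command("nmcli", "conn", "up", conn).CombinedOutput()
		if err == nil {
			return nil
		}
		fmt.Printf("nmcli failed: %v: %s\n", err, out)
		time.Sleep(5 * time.Second) // assumed delay; the script's exact pacing may differ
	}
	return fmt.Errorf("connection %s did not come up after %d attempts", conn, attempts)
}

func main() {
	if err := bringUp("ovs-if-phys0", 10); err != nil {
		fmt.Println(err)
	}
}
```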
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1537] audit: op="connection-activate" uuid="e68e1ed9-b86e-4f80-9531-ca5523ce55b5" name="ovs-if-phys0" pid=7532 uid=0 result="success"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00082|bridge|INFO|bridge br-ex: deleted interface eno12399 on port 3
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1548] device (eno12399): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed')
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1558] device (eno12399): Activation: starting connection 'ovs-if-phys0' (e68e1ed9-b86e-4f80-9531-ca5523ce55b5)
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1561] device (eno12399): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1563] device (eno12399): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1566] device (eno12399): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1569] device (eno12399): Activation: connection 'ovs-if-phys0' enslaved, continuing activation
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.1571] device (eno12399): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jan 23 16:14:55 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00001|ofproto_dpif_xlate(handler594)|WARN|Dropped 13 log messages in last 107 seconds (most recently, 96 seconds ago) due to excessive rate
Jan 23 16:14:55 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00002|ofproto_dpif_xlate(handler594)|WARN|received packet on unknown port 3 on bridge br-ex while processing tcp,in_port=3,vlan_tci=0x0000,dl_src=b4:96:91:c8:a2:94,dl_dst=b4:96:91:c8:a6:30,nw_src=10.129.0.28,nw_dst=192.168.18.8,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=49408,tp_dst=443,tcp_flags=rst
Jan 23 16:14:55 hub-master-0.workload.bos2.lab kernel: device eno12399 left promiscuous mode
Jan 23 16:14:55 hub-master-0.workload.bos2.lab kernel: device eno12399 entered promiscuous mode
Jan 23 16:14:55 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00083|bridge|INFO|bridge br-ex: added interface eno12399 on port 1
Jan 23 16:14:55 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00084|bridge|INFO|bridge br-ex: using datapath ID 0000b49691c8a630
Jan 23 16:14:55 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00085|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.2446] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath.
Jan 23 16:14:55 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490495.2516] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath.
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="retrieved Address map map[0xc000247b00:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a631/64"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="retrieved Address map map[0xc000276900:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Ignoring filtered route {Ifindex: 7 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=debug msg="Retrieved route map map[7:[{Ifindex: 7 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}]]"
Jan 23 16:14:55 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:55Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:56 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:56Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:57 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:57Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:58 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:58Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:14:59 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:14:59Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:00 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:00Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:01 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:01Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:02 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:02Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:03 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:03Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:04 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:04Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:05 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:05Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:06 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:06Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:07 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:07Z" level=error msg="Failed to find a suitable node IP"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490508.1176] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 16:15:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490508.1179] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 16:15:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490508.1179] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 16:15:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490508.1190] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 16:15:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490508.1191] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="retrieved Address map map[0xc000277d40:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Retrieved route map map[]"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="retrieved Address map map[0xc0002c3320:[127.0.0.1/8 lo ::1/128]]"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=debug msg="Retrieved route map map[]"
Jan 23 16:15:08 hub-master-0.workload.bos2.lab nm-dispatcher[7461]: time="2023-01-23T16:15:08Z" level=error msg="Failed to find a suitable node IP"
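eno12409 (the second NIC) never received an address, so its 'Wired Connection' activation fails with ip-config-unavailable and both DHCP transactions are cancelled. A quick way to replay exactly this history from the journal when triaging (standard journalctl/grep usage, nothing node-specific assumed):

```bash
# Replay every NetworkManager state transition for the failing device
# from the current boot, then check whether any DHCP server answered.
journalctl -b -u NetworkManager --no-pager | grep 'device (eno12409)'
journalctl -b -u NetworkManager --no-pager | grep -E 'dhcp[46] \(eno12409\)'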
[16:15:09 through 16:15:19: nm-dispatcher[7461] repeats the identical two-pass scan once per second — ignoring the filtered link-local address fe80::c476:deff:fe0c:d9da/64, retrieving an address map holding only 127.0.0.1/8 and ::1/128 on lo, checking VIPs 192.168.18.7 and 192.168.18.8 against the loopback address, filtering the ::1/128 and fe80::/64 routes, and retrieving an empty route map — and every cycle ends with level=error msg="Failed to find a suitable node IP".]
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7422]: NM resolv-prepender: Timeout occurred
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[5168]: req:36 'up' [eno12399], "/etc/NetworkManager/dispatcher.d/30-resolv-prepender": complete: failed with Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1.
[eno12399], "/etc/NetworkManager/dispatcher.d/30-resolv-prepender": complete: failed with Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1. Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7691]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7691]: + [[ ovs-if-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7691]: + '[' -z ']' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7691]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7691]: Not a DHCP4 address. Ignoring. Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7691]: + exit 0 Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7697]: + '[' -z ']' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7697]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7697]: Not a DHCP6 address. Ignoring. Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7697]: + exit 0 Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet... -- Subject: Unit console-login-helper-messages-issuegen.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has begun starting up. Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.3130] dispatcher: (54) /etc/NetworkManager/dispatcher.d/30-resolv-prepender failed (failed): Script '/etc/NetworkManager/dispatcher.d/30-resolv-prepender' exited with status 1. Jan 23 16:15:20 hub-master-0.workload.bos2.lab chronyd[2922]: Source 2603:c020:6:b900:5e7:2ec:2cdb:c668 offline Jan 23 16:15:20 hub-master-0.workload.bos2.lab chronyd[2922]: Source 2604:a880:800:a1::ec9:5001 offline Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope has successfully entered the 'dead' state. Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope: Consumed 95ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope completed and consumed the indicated resources. Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7736]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7736]: + [[ ovs-if-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7736]: + '[' -z ']' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7736]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7736]: Not a DHCP4 address. Ignoring. 
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7736]: + exit 0 Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7743]: + '[' -z ']' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7743]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7743]: Not a DHCP6 address. Ignoring. Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7743]: + exit 0 Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + INTERFACE_NAME=eno12399 Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + OPERATION=pre-up Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' pre-up '!=' pre-up ']' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7765]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7766]: ++ awk -F : '{if($1=="eno12399" && $2!~/^ovs*/) print $NF}' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + INTERFACE_CONNECTION_UUID=e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' e68e1ed9-b86e-4f80-9531-ca5523ce55b5 == '' ']' Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7771]: ++ nmcli -t -f connection.slave-type conn show e68e1ed9-b86e-4f80-9531-ca5523ce55b5 Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7772]: ++ awk -F : '{print $NF}' Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228-userdata-shm.mount completed and consumed the indicated resources. Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-8ad723ac515582b392ae1fe0dd375b998cb261ce8530daa375f156f53642c124-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-8ad723ac515582b392ae1fe0dd375b998cb261ce8530daa375f156f53642c124-merged.mount has successfully entered the 'dead' state. Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-8ad723ac515582b392ae1fe0dd375b998cb261ce8530daa375f156f53642c124-merged.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-8ad723ac515582b392ae1fe0dd375b998cb261ce8530daa375f156f53642c124-merged.mount completed and consumed the indicated resources. 
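The repeated "'[' -z ']'" traces above are each dispatcher hook bailing out because the event carries no DHCP lease. A reconstruction of that guard pattern (CONNECTION_ID and DHCP4_IP_ADDRESS are standard NetworkManager dispatcher environment variables; NETWORK_TYPE and the overall structure are illustrative stand-ins, not the literal script):

```bash
#!/usr/bin/env bash
set -x
# Only act on the OVNKubernetes network type and the default NIC profile.
[[ ${NETWORK_TYPE:-OVNKubernetes} == OVNKubernetes ]] \
  && [[ ${CONNECTION_ID:-} == "Wired Connection" ]] \
  && : # special handling for the default 'Wired Connection' profile

# Events without a DHCPv4 lease are ignored, matching the xtrace above.
if [ -z "${DHCP4_IP_ADDRESS:-}" ]; then
  echo 'Not a DHCP4 address. Ignoring.'
  exit 0
fi
# ...only events that actually carry a DHCPv4 lease get past this point...
```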
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' ovs-port '!=' ovs-port ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7777]: ++ nmcli -t -f connection.master conn show e68e1ed9-b86e-4f80-9531-ca5523ce55b5
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7778]: ++ awk -F : '{print $NF}'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + PORT=bac6281c-e524-4e3e-8259-abe05ad061e7
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 == '' ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7789]: ++ awk -F : '{if( ($1=="bac6281c-e524-4e3e-8259-abe05ad061e7" || $3=="bac6281c-e524-4e3e-8259-abe05ad061e7") && $2~/^ovs*/) print $NF}'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7788]: ++ nmcli -t -f device,type,uuid conn
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + PORT_CONNECTION_UUID=bac6281c-e524-4e3e-8259-abe05ad061e7
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' bac6281c-e524-4e3e-8259-abe05ad061e7 == '' ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7794]: ++ nmcli -t -f connection.slave-type conn show bac6281c-e524-4e3e-8259-abe05ad061e7
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7795]: ++ awk -F : '{print $NF}'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + PORT_OVS_SLAVE_TYPE=ovs-bridge
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' ovs-bridge '!=' ovs-bridge ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7801]: ++ awk -F : '{print $NF}'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7800]: ++ nmcli -t -f connection.master conn show bac6281c-e524-4e3e-8259-abe05ad061e7
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + BRIDGE_NAME=br-ex
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' br-ex '!=' br-ex ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + ovs-vsctl list interface eno12399
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + declare -A INTERFACES
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' -f /run/ofport_requests.br-ex ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents:
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + cat /run/ofport_requests.br-ex
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7806]: declare -A INTERFACES=([eno12399]="3" )
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + source /run/ofport_requests.br-ex
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: ++ INTERFACES=([eno12399]="3")
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: ++ declare -A INTERFACES
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + '[' a ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + ovs-vsctl set Interface eno12399 ofport_request=3
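This pre-up hook keeps a bash associative array of OpenFlow port assignments in /run/ofport_requests.br-ex so that an interface re-attached to br-ex gets the same ofport across reconfigurations. A sketch of that persist/restore bookkeeping (the adopt-current-ofport branch for a first activation is an assumption; the traced run takes the restore path):

```bash
#!/usr/bin/env bash
BRIDGE=br-ex
IFACE=eno12399
CONF="/run/ofport_requests.${BRIDGE}"

declare -A INTERFACES
if [ -f "$CONF" ]; then
  echo "Sourcing configuration file '$CONF' with contents:"
  cat "$CONF"                      # e.g. declare -A INTERFACES=([eno12399]="3" )
  source "$CONF"
fi

if [ -z "${INTERFACES[$IFACE]:-}" ]; then
  # first activation (assumed): adopt the current ofport as the sticky request
  INTERFACES[$IFACE]=$(ovs-vsctl get Interface "$IFACE" ofport)
fi
ovs-vsctl set Interface "$IFACE" ofport_request="${INTERFACES[$IFACE]}"
declare -p INTERFACES > "$CONF"    # persist for the next pre-up event
```

The effect is visible immediately below: ovs-vswitchd removes eno12399 from port 1 and re-adds it on port 3, the number recorded in the file.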
Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope has successfully entered the 'dead' state.
Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope: Consumed 128ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-1b426f34a9baa43f1c1dbc26fd8d59773d0dc6fcdbb151116c1dd36e830e8228.scope completed and consumed the indicated resources.
Jan 23 16:15:20 hub-master-0.workload.bos2.lab ovs-vsctl[7807]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface eno12399 ofport_request=3
Jan 23 16:15:20 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00086|bridge|INFO|bridge br-ex: deleted interface eno12399 on port 1
Jan 23 16:15:20 hub-master-0.workload.bos2.lab kernel: device eno12399 left promiscuous mode
Jan 23 16:15:20 hub-master-0.workload.bos2.lab kernel: device eno12399 entered promiscuous mode
Jan 23 16:15:20 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00087|bridge|INFO|bridge br-ex: added interface eno12399 on port 3
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7763]: + declare -p INTERFACES
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.4622] device (eno12399): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.4623] device (eno12399): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.4625] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.4627] device (eno12399): Activation: successful, device activated.
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.4629] manager: startup complete
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[7532]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/18)
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + s=0
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + break
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 0 -eq 0 ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Brought up connection ovs-if-phys0 successfully'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Brought up connection ovs-if-phys0 successfully
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + false
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli c mod ovs-if-phys0 connection.autoconnect yes
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.4669] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath.
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.4818] audit: op="connection-update" uuid="e68e1ed9-b86e-4f80-9531-ca5523ce55b5" name="ovs-if-phys0" args="connection.autoconnect,connection.timestamp" pid=7813 uid=0 result="success"
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for conn in "${connections[@]}"
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[7827]: ++ nmcli -g connection.slave-type connection show ovs-if-br-ex
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7840]: NM resolv-prepender triggered by eno12399 up.
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7840]: NM resolv-prepender: NM resolv.conf still empty of nameserver
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local slave_type=ovs-port
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local is_slave=false
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ovs-port = team ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' ovs-port = bond ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local master_interface
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + false
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[7843]: ++ nmcli -g GENERAL.STATE conn show ovs-if-br-ex
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + local active_state=
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' '' == activated ']'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + for i in {1..10}
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Attempt 1 to bring up connection ovs-if-br-ex'
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Attempt 1 to bring up connection ovs-if-br-ex
Jan 23 16:15:20 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli conn up ovs-if-br-ex
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5239] agent-manager: agent[59e91008f6272cb7,:1.256/nmcli-connect/0]: agent registered
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5247] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5249] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5251] device (br-ex): Activation: starting connection 'ovs-if-br-ex' (94338756-3372-4447-bf85-a1e57729e56c)
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5251] audit: op="connection-activate" uuid="94338756-3372-4447-bf85-a1e57729e56c" name="ovs-if-br-ex" pid=7847 uid=0 result="success"
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5251] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5252] manager: NetworkManager state is now CONNECTING
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5253] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
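The xtrace above shows configure-ovs.sh entering its per-connection bring-up loop (for i in {1..10}) for ovs-if-br-ex. Reduced to its visible shape (the sleep between attempts and the exact failure message are assumptions; attempt 1 succeeds in this log):

```bash
#!/usr/bin/env bash
conn=ovs-if-br-ex
s=1
for i in {1..10}; do
  echo "Attempt $i to bring up connection $conn"
  if nmcli conn up "$conn"; then
    s=0
    break                 # matches the '+ s=0' / '+ break' trace above
  fi
  sleep 5                 # assumed back-off; the real delay may differ
done
if [ "$s" -eq 0 ]; then
  echo "Brought up connection $conn successfully"
else
  echo "ERROR: Cannot bring up connection $conn" >&2
  exit 1                  # illustrative failure handling
fi
```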
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5254] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5257] device (br-ex): Activation: connection 'ovs-if-br-ex' enslaved, continuing activation
Jan 23 16:15:20 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00088|netdev|WARN|failed to set MTU for network device br-ex: No such device
Jan 23 16:15:20 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00089|bridge|INFO|bridge br-ex: added interface br-ex on port 65534
Jan 23 16:15:20 hub-master-0.workload.bos2.lab kernel: device br-ex entered promiscuous mode
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5293] device (br-ex): set-hw-addr: set-cloned MAC address to B4:96:91:C8:A6:30 (B4:96:91:C8:A6:30)
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5294] device (br-ex): carrier: link connected
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5300] dhcp4 (br-ex): activation: beginning transaction (timeout in 45 seconds)
Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd-udevd[7853]: Using default interface naming scheme 'rhel-8.0'.
Jan 23 16:15:20 hub-master-0.workload.bos2.lab systemd-udevd[7853]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5355] dhcp4 (br-ex): state changed new lease, address=192.168.18.12
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5357] ovs: ovs interface "patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab" ((null)) failed: No usable peer 'patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int' exists in 'system' datapath.
Jan 23 16:15:20 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490520.5358] policy: set 'ovs-if-br-ex' (br-ex) as default for IPv4 routing and DNS
Jan 23 16:15:20 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00001|ofproto_dpif_xlate(handler600)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=3,vlan_tci=0x0000,dl_src=0a:58:64:40:00:01,dl_dst=0a:58:64:40:00:02,nw_src=10.130.0.51,nw_dst=192.168.18.12,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=48426,tp_dst=9103,tcp_flags=syn
Jan 23 16:15:20 hub-master-0.workload.bos2.lab nm-dispatcher[7858]: nameserver 192.168.18.9
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope. -- Subject: Unit libpod-conmon-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope has finished starting up. -- -- The start-up result is done.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state.
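Note the set-cloned MAC line just before the DHCP transaction: br-ex presents the physical NIC's MAC (B4:96:91:C8:A6:30), so the DHCP server immediately hands the bridge interface the same lease (192.168.18.12) the NIC would have received. An illustrative nmcli shape for such a connection (not the literal configure-ovs.sh commands; profile and port names are assumptions):

```bash
# Internal OVS interface on br-ex that does DHCP with the NIC's cloned MAC.
nmcli conn add type ovs-interface slave-type ovs-port con-name ovs-if-br-ex \
    conn.interface br-ex master ovs-port-br-ex \
    802-3-ethernet.cloned-mac-address B4:96:91:C8:A6:30 \
    ipv4.method auto connection.autoconnect no
```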
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet. -- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has finished starting up. -- -- The start-up result is done.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 12ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9. -- Subject: Unit libpod-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope has finished starting up. -- -- The start-up result is done.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=info msg="Parsed Virtual IP 192.168.18.7"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=info msg="Parsed Virtual IP 192.168.18.8"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="retrieved Address map map[0xc0002f2360:[127.0.0.1/8 lo ::1/128] 0xc0002f2fc0:[192.168.18.12/25 br-ex]]"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Retrieved route map map[]"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7893]: time="2023-01-23T16:15:21Z" level=info msg="Chosen Node IPs: [192.168.18.12]"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: libpod-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope has successfully entered the 'dead' state.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: libpod-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope: Consumed 50ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope completed and consumed the indicated resources.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-e3cdf7d5abe20c5ce9b318a4350fd205225afd163abf5a5c67a8e3d562831b72-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-e3cdf7d5abe20c5ce9b318a4350fd205225afd163abf5a5c67a8e3d562831b72-merged.mount has successfully entered the 'dead' state.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope has successfully entered the 'dead' state.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope: Consumed 104ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-6810bd8a7e772f2c23a5b84a3809751ba64256193394f7aae6704753755a18f9.scope completed and consumed the indicated resources.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7985]: Failed to get unit file state for systemd-resolved.service: No such file or directory
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[7840]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)
Jan 23 16:15:21 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490521.8092] audit: op="reload" arg="2" pid=7995 uid=0 result="success"
Jan 23 16:15:21 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490521.8093] config: signal: DNS_RC
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8000]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8000]: + [[ ovs-if-phys0 == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]]
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8000]: + '[' -z ']'
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8000]: + echo 'Not a DHCP4 address. Ignoring.'
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8000]: Not a DHCP4 address. Ignoring.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8000]: + exit 0
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8001]: + '[' -z ']'
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8001]: + echo 'Not a DHCP6 address. Ignoring.'
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8001]: Not a DHCP6 address. Ignoring.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8001]: + exit 0
Jan 23 16:15:21 hub-master-0.workload.bos2.lab systemd[1]: Starting Generate console-login-helper-messages issue snippet... -- Subject: Unit console-login-helper-messages-issuegen.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has begun starting up.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8030]: NM resolv-prepender triggered by br-ex dhcp4-change.
Jan 23 16:15:21 hub-master-0.workload.bos2.lab nm-dispatcher[8031]: nameserver 192.168.18.9
Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope. -- Subject: Unit libpod-conmon-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope has finished starting up. -- -- The start-up result is done.
Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55. -- Subject: Unit libpod-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope has finished starting up. -- -- The start-up result is done.
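With a node IP finally chosen, the dispatcher script writes it as the first nameserver ahead of the DHCP-provided ones (192.168.18.9) taken from /var/run/NetworkManager/resolv.conf, then reloads NetworkManager's DNS configuration. A simplified sketch of that prepend step (the real 30-resolv-prepender also handles systemd-resolved and retry cases, hence the "Failed to get unit file state" probe above):

```bash
#!/usr/bin/env bash
NODE_IP=192.168.18.12
NM_RESOLV=/var/run/NetworkManager/resolv.conf

tmp=$(mktemp)
{
  echo "nameserver ${NODE_IP}"                              # node-local resolver first
  grep '^nameserver ' "$NM_RESOLV" | grep -v " ${NODE_IP}$" # then NM's servers, e.g. 192.168.18.9
} > "$tmp"
cp "$tmp" /etc/resolv.conf
rm -f "$tmp"
```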
Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=info msg="Parsed Virtual IP 192.168.18.7" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=info msg="Parsed Virtual IP 192.168.18.8" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="retrieved Address map map[0xc0001c66c0:[127.0.0.1/8 lo ::1/128] 0xc0001c7320:[192.168.18.12/25 br-ex]]" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Retrieved route map map[]" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7" Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8068]: time="2023-01-23T16:15:22Z" level=info msg="Chosen Node IPs: [192.168.18.12]" Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: libpod-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope has successfully entered the 'dead' state. 
Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: libpod-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope: Consumed 50ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope completed and consumed the indicated resources. Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope has successfully entered the 'dead' state. Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope: Consumed 119ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55.scope completed and consumed the indicated resources. Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-c933abda55d3c57e6920d1925ab8edde5f20959b3899ba5dbe6e0fecefe5f3d2-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-c933abda55d3c57e6920d1925ab8edde5f20959b3899ba5dbe6e0fecefe5f3d2-merged.mount has successfully entered the 'dead' state. Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-9fdf0abe2689dd1e2283b81d25b218b99eb2d206615388553be70f1853a99e55-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8154]: Failed to get unit file state for systemd-resolved.service: No such file or directory Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8030]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf) Jan 23 16:15:22 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490522.6720] audit: op="reload" arg="2" pid=8164 uid=0 result="success" Jan 23 16:15:22 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490522.6720] config: signal: DNS_RC Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8169]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8169]: + [[ ovs-if-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8169]: + '[' -z ']' Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8169]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8169]: Not a DHCP4 address. Ignoring. 
Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8169]: + exit 0 Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8170]: + '[' -z ']' Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8170]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8170]: Not a DHCP6 address. Ignoring. Jan 23 16:15:22 hub-master-0.workload.bos2.lab nm-dispatcher[8170]: + exit 0 Jan 23 16:15:22 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490522.6844] dhcp6 (br-ex): activation: beginning transaction (timeout in 45 seconds) Jan 23 16:15:22 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490522.6848] policy: set 'ovs-if-br-ex' (br-ex) as default for IPv6 routing and DNS Jan 23 16:15:22 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490522.6857] dhcp6 (br-ex): state changed new lease, address=2600:52:7:18::12 Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service has successfully entered the 'dead' state. Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: Started Generate console-login-helper-messages issue snippet. -- Subject: Unit console-login-helper-messages-issuegen.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit console-login-helper-messages-issuegen.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:22 hub-master-0.workload.bos2.lab systemd[1]: console-login-helper-messages-issuegen.service: Consumed 11ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit console-login-helper-messages-issuegen.service completed and consumed the indicated resources. Jan 23 16:15:24 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490524.2927] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8195]: NM resolv-prepender triggered by br-ex dhcp6-change. Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8196]: nameserver 192.168.18.9 Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8196]: nameserver 2600:52:7:18::9 Jan 23 16:15:24 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope. -- Subject: Unit libpod-conmon-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:24 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e. -- Subject: Unit libpod-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope has finished starting up. -- -- The start-up result is done. 
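
The escaped patterns in the traces above ([[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]]) are just how bash xtrace prints a quoted right-hand side; the underlying test is a literal string comparison. The early exits then follow from an empty-string check: when the dispatcher event carries no DHCP address, '[ -z ]' succeeds and the hook stops. A hedged reconstruction of that branch (NetworkManager exports DHCP4_* variables to dispatcher scripts, but the exact variable name this hook reads is an assumption):

    if [ -z "${DHCP4_IP_ADDRESS:-}" ]; then   # variable name assumed for the sketch
        echo 'Not a DHCP4 address. Ignoring.'
        exit 0
    fi
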
Jan 23 16:15:24 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=info msg="Parsed Virtual IP 192.168.18.7" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=info msg="Parsed Virtual IP 192.168.18.8" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="retrieved Address map map[0xc000037320:[127.0.0.1/8 lo ::1/128] 0xc00037a000:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Retrieved route map map[15:[{Ifindex: 15 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7" Jan 23 16:15:24 hub-master-0.workload.bos2.lab nm-dispatcher[8232]: time="2023-01-23T16:15:24Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]" Jan 23 16:15:24 
hub-master-0.workload.bos2.lab systemd[1]: libpod-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope has successfully entered the 'dead' state. Jan 23 16:15:24 hub-master-0.workload.bos2.lab systemd[1]: libpod-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope: Consumed 52ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope completed and consumed the indicated resources. Jan 23 16:15:24 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:15:24 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-c419fc694c2642ffd7c65e68b1816cdba604c39a6457bd762bde554ef50e8352-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-c419fc694c2642ffd7c65e68b1816cdba604c39a6457bd762bde554ef50e8352-merged.mount has successfully entered the 'dead' state. Jan 23 16:15:24 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope has successfully entered the 'dead' state. Jan 23 16:15:24 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope: Consumed 105ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-148d4bc7cf04e4032c51282274db497519e2cae4523ddee1fc4ab4ce9e63598e.scope completed and consumed the indicated resources. 
Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8313]: Failed to get unit file state for systemd-resolved.service: No such file or directory Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8195]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf) Jan 23 16:15:25 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490525.1150] audit: op="reload" arg="2" pid=8323 uid=0 result="success" Jan 23 16:15:25 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490525.1151] config: signal: DNS_RC Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8328]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8328]: + [[ ovs-if-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8328]: + '[' -z ']' Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8328]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8328]: Not a DHCP4 address. Ignoring. Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8328]: + exit 0 Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8329]: + '[' -z ']' Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8329]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8329]: Not a DHCP6 address. Ignoring. Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8329]: + exit 0 Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8336]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8336]: + INTERFACE_NAME=br-ex Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8336]: + OPERATION=pre-up Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8336]: + '[' pre-up '!=' pre-up ']' Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8338]: ++ nmcli -t -f device,type,uuid conn Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8339]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}' Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8336]: + INTERFACE_CONNECTION_UUID= Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8336]: + '[' '' == '' ']' Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8336]: + exit 0 Jan 23 16:15:25 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490525.1577] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jan 23 16:15:25 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490525.1578] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jan 23 16:15:25 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490525.1579] manager: NetworkManager state is now CONNECTED_SITE Jan 23 16:15:25 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490525.1580] device (br-ex): Activation: successful, device activated. 
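
The pre-up hook traced above looks for a plain (non-OVS) NetworkManager connection that still owns br-ex and bails out when there is none. Reassembled from the trace into a runnable fragment, slightly generalized by passing the interface name to awk:

    INTERFACE_NAME=br-ex
    INTERFACE_CONNECTION_UUID=$(nmcli -t -f device,type,uuid conn \
        | awk -F : -v dev="$INTERFACE_NAME" '{if($1==dev && $2!~/^ovs*/) print $NF}')
    if [ -z "$INTERFACE_CONNECTION_UUID" ]; then
        exit 0   # no non-OVS connection owns the interface; nothing to migrate
    fi
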
Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[7847]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/19) Jan 23 16:15:25 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490525.1583] manager: NetworkManager state is now CONNECTED_GLOBAL Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + s=0 Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + break Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 0 -eq 0 ']' Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Brought up connection ovs-if-br-ex successfully' Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Brought up connection ovs-if-br-ex successfully Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + false Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli c mod ovs-if-br-ex connection.autoconnect yes Jan 23 16:15:25 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490525.1758] audit: op="connection-update" uuid="94338756-3372-4447-bf85-a1e57729e56c" name="ovs-if-br-ex" args="connection.autoconnect,connection.timestamp" pid=8344 uid=0 result="success" Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + try_to_bind_ipv6_address Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + retries=60 Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ 60 -eq 0 ]] Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[8358]: ++ ip -6 -j addr Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[8360]: ++ jq -r 'first(.[] | select(.ifname=="br-ex") | .addr_info[] | select(.scope=="global") | .local)' Jan 23 16:15:25 hub-master-0.workload.bos2.lab chronyd[2922]: Source 192.168.18.9 online Jan 23 16:15:25 hub-master-0.workload.bos2.lab chronyd[2922]: Source 2603:c020:6:b900:5e7:2ec:2cdb:c668 online Jan 23 16:15:25 hub-master-0.workload.bos2.lab chronyd[2922]: Source 2604:a880:800:a1::ec9:5001 online Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip=2600:52:7:18::12 Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ 2600:52:7:18::12 == '' ]] Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[8395]: ++ shuf -i 50000-60000 -n 1 Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + random_port=55693 Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Trying to bind 2600:52:7:18::12 on port 55693' Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Trying to bind 2600:52:7:18::12 on port 55693 Jan 23 16:15:25 hub-master-0.workload.bos2.lab configure-ovs.sh[8396]: ++ timeout 2s nc -l 2600:52:7:18::12 55693 Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8399]: NM resolv-prepender triggered by br-ex up. Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8405]: nameserver 192.168.18.9 Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8405]: nameserver 2600:52:7:18::9 Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: Started libpod-conmon-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope. 
-- Subject: Unit libpod-conmon-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64. -- Subject: Unit libpod-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=info msg="Parsed Virtual IP 192.168.18.7" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=info msg="Parsed Virtual IP 192.168.18.8" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered address fe80::c476:deff:fe0c:d9da/64" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered address fe80::b696:91ff:fec8:a630/64" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="retrieved Address map map[0xc0003778c0:[127.0.0.1/8 lo ::1/128] 0xc00037a5a0:[192.168.18.12/25 br-ex 2600:52:7:18::12/128]]" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.7" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Checking whether address 127.0.0.1/8 lo contains VIP 192.168.18.8" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: Src: 192.168.18.12 Gw: 192.168.18.1 Flags: [] Table: 254}" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: 192.168.18.0/25 Src: 192.168.18.12 Gw: Flags: [] Table: 254}" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered route {Ifindex: 1 Dst: ::1/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: 2600:52:7:18::12/128 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered route {Ifindex: 10 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered route {Ifindex: 15 Dst: fe80::/64 Src: Gw: Flags: [] Table: 254}" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Ignoring filtered route 
{Ifindex: 15 Dst: Src: Gw: fe80::1532:4e62:7604:4733 Flags: [] Table: 254}" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Retrieved route map map[15:[{Ifindex: 15 Dst: 2600:52:7:18::/64 Src: Gw: Flags: [] Table: 254}]]" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Checking whether address 192.168.18.12/25 br-ex contains VIP 192.168.18.7" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=debug msg="Address 192.168.18.12/25 br-ex contains VIP 192.168.18.7" Jan 23 16:15:25 hub-master-0.workload.bos2.lab nm-dispatcher[8438]: time="2023-01-23T16:15:25Z" level=info msg="Chosen Node IPs: [192.168.18.12 2600:52:7:18::12]" Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: libpod-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope has successfully entered the 'dead' state. Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: libpod-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope: Consumed 50ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope completed and consumed the indicated resources. Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-d42f5241b174d5d0de09e2b6c47e3553d1bb57d133f6d9864d12b974cd8ae642-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-d42f5241b174d5d0de09e2b6c47e3553d1bb57d133f6d9864d12b974cd8ae642-merged.mount has successfully entered the 'dead' state. Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope has successfully entered the 'dead' state. Jan 23 16:15:25 hub-master-0.workload.bos2.lab systemd[1]: libpod-conmon-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope: Consumed 98ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-84996068348cad54adc39959f28898ac4ead6fbded5f0caca83e343a79ea8f64.scope completed and consumed the indicated resources. 
Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8515]: Failed to get unit file state for systemd-resolved.service: No such file or directory Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8399]: NM resolv-prepender: Prepending 'nameserver 192.168.18.12' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf) Jan 23 16:15:26 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490526.0147] audit: op="reload" arg="2" pid=8525 uid=0 result="success" Jan 23 16:15:26 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490526.0148] config: signal: DNS_RC Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8530]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8530]: + [[ ovs-if-br-ex == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8530]: + '[' -z 192.168.18.12 ']' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8530]: + '[' 86400 -lt 4294967295 ']' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8530]: + echo 'Not an infinite DHCP4 lease. Ignoring.' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8530]: Not an infinite DHCP4 lease. Ignoring. Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8530]: + exit 0 Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8531]: + '[' -z 2600:52:7:18::12 ']' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8533]: ++ ip -j -6 a show br-ex Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8534]: ++ jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .preferred_life_time' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8531]: + LEASE_TIME=43197 Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8536]: ++ ip -j -6 a show br-ex Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8537]: ++ jq -r '.[].addr_info[] | select(.scope=="global") | select(.deprecated!=true) | select(.local=="2600:52:7:18::12") | .prefixlen' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8531]: + PREFIX_LEN=128 Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8531]: + '[' 43197 -lt 4294967295 ']' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8531]: + echo 'Not an infinite DHCP6 lease. Ignoring.' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8531]: Not an infinite DHCP6 lease. Ignoring. Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8531]: + exit 0 Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8554]: + [[ OVNKubernetes == \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8554]: + [[ '' == \W\i\r\e\d\ \C\o\n\n\e\c\t\i\o\n ]] Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8554]: + '[' -z ']' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8554]: + echo 'Not a DHCP4 address. Ignoring.' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8554]: Not a DHCP4 address. Ignoring. Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8554]: + exit 0 Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8555]: + '[' -z ']' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8555]: + echo 'Not a DHCP6 address. Ignoring.' Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8555]: Not a DHCP6 address. Ignoring. 
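
The lease checks above separate a DHCP address from an effectively static one by lifetime: 4294967295 (2^32-1) is the sentinel for an infinite lease, so both the 86400 s DHCP4 lease and the 43197 s DHCP6 preferred lifetime are ignored. The DHCP6 side, reassembled from the traced commands:

    IFACE=br-ex
    ADDR=2600:52:7:18::12         # from the trace above
    LEASE_TIME=$(ip -j -6 a show "$IFACE" | jq -r '.[].addr_info[]
        | select(.scope=="global") | select(.deprecated!=true)
        | select(.local=="'"$ADDR"'") | .preferred_life_time')
    if [ "$LEASE_TIME" -lt 4294967295 ]; then   # anything below 2^32-1 is finite
        echo 'Not an infinite DHCP6 lease. Ignoring.'
        exit 0
    fi
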
Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8555]: + exit 0 Jan 23 16:15:26 hub-master-0.workload.bos2.lab nm-dispatcher[8558]: Error: Device '' not found. Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8396]: ++ echo 124 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + exit_code=124 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ exit_code -eq 124 ]] Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Address bound successfully' Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Address bound successfully Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + break Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + [[ 60 -eq 0 ]] Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + handle_exit Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + e=0 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + '[' 0 -eq 0 ']' Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + print_state Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + echo 'Current device, connection, interface and routing state:' Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: Current device, connection, interface and routing state: Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8562]: + nmcli -g all device Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: + grep -v unmanaged Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: br-ex:ovs-interface:connected:full:full:/org/freedesktop/NetworkManager/Devices/27:ovs-if-br-ex:94338756-3372-4447-bf85-a1e57729e56c:/org/freedesktop/NetworkManager/ActiveConnection/19 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: eno12399:ethernet:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/10:ovs-if-phys0:e68e1ed9-b86e-4f80-9531-ca5523ce55b5:/org/freedesktop/NetworkManager/ActiveConnection/18 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: br-ex:ovs-bridge:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/24:br-ex:69b0fd5a-9982-4dfb-a0ff-9478dcfb5700:/org/freedesktop/NetworkManager/ActiveConnection/12 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: br-ex:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/26:ovs-port-br-ex:a6d2b9b5-7a03-4001-9502-ea4b59e4c55d:/org/freedesktop/NetworkManager/ActiveConnection/13 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: eno12399:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/25:ovs-port-phys0:bac6281c-e524-4e3e-8259-abe05ad061e7:/org/freedesktop/NetworkManager/ActiveConnection/14 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: patch-br-int-to-br-ex_hub-master-0.workload.bos2.lab:ovs-interface:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/15::: Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: eno12409:ethernet:disconnected:none:none:/org/freedesktop/NetworkManager/Devices/4::: Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: eno8303:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/5::: Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: 
eno8403:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/6::: Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: ens2f0:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/7::: Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8563]: ens2f1:ethernet:unavailable:none:none:/org/freedesktop/NetworkManager/Devices/8::: Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + nmcli -g all connection Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8567]: ovs-if-br-ex:94338756-3372-4447-bf85-a1e57729e56c:ovs-interface:1674490525:Mon Jan 23 16\:15\:25 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/12:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/19:ovs-port:/etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8567]: br-ex:69b0fd5a-9982-4dfb-a0ff-9478dcfb5700:ovs-bridge:1674490399:Mon Jan 23 16\:13\:19 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/8:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/12::/etc/NetworkManager/system-connections/br-ex.nmconnection Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8567]: ovs-if-phys0:e68e1ed9-b86e-4f80-9531-ca5523ce55b5:802-3-ethernet:1674490520:Mon Jan 23 16\:15\:20 2023:yes:100:no:/org/freedesktop/NetworkManager/Settings/11:yes:eno12399:activated:/org/freedesktop/NetworkManager/ActiveConnection/18:ovs-port:/etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8567]: ovs-port-br-ex:a6d2b9b5-7a03-4001-9502-ea4b59e4c55d:ovs-port:1674490399:Mon Jan 23 16\:13\:19 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/10:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/13:ovs-bridge:/etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8567]: ovs-port-phys0:bac6281c-e524-4e3e-8259-abe05ad061e7:ovs-port:1674490399:Mon Jan 23 16\:13\:19 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/9:yes:eno12399:activated:/org/freedesktop/NetworkManager/ActiveConnection/14:ovs-bridge:/etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8567]: Wired Connection:8105e4a7-d75c-4c11-b250-7d472ed203fe:802-3-ethernet:0:never:yes:0:no:/org/freedesktop/NetworkManager/Settings/1:no:::::/run/NetworkManager/system-connections/default_connection.nmconnection Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8567]: Wired Connection:99853833-baac-4bca-8508-0bff9efdaf37:802-3-ethernet:1674490399:Mon Jan 23 16\:13\:19 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/2:no:::::/etc/NetworkManager/system-connections/default_connection.nmconnection Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip -d address show Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: ovs-configuration.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit ovs-configuration.service has successfully entered the 'dead' state. 
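
The "Address bound successfully" verdict a few entries up rests on an inverted reading of the exit code: timeout kills nc after 2 s only if nc managed to bind and sit listening, so exit status 124 means the IPv6 address is usable. Simplified from the trace (the script itself captures the code through a command substitution, which is why the trace shows ++ echo 124):

    ip=2600:52:7:18::12
    random_port=$(shuf -i 50000-60000 -n 1)
    echo "Trying to bind $ip on port $random_port"
    timeout 2s nc -l "$ip" "$random_port"
    if [ $? -eq 124 ]; then   # timeout fired: nc held the socket for the full 2 s
        echo 'Address bound successfully'
    fi
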
Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: inet 127.0.0.1/8 scope host lo Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: valid_lft forever preferred_lft forever Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: inet6 ::1/128 scope host Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: valid_lft forever preferred_lft forever Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 2: eno8303: mtu 1500 qdisc mq state DOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether b0:7b:25:de:1a:bc brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 60 maxmtu 9000 numtxqueues 5 numrxqueues 5 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 3: eno12399: mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether b4:96:91:c8:a6:30 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9702 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: openvswitch_slave numtxqueues 112 numrxqueues 112 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 4: ens2f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether 04:3f:72:fe:d9:b8 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 768 numrxqueues 126 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 5: eno8403: mtu 1500 qdisc mq state DOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether b0:7b:25:de:1a:bd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 60 maxmtu 9000 numtxqueues 5 numrxqueues 5 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 6: ens2f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether 04:3f:72:fe:d9:b9 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 768 numrxqueues 126 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 7: eno12409: mtu 1500 qdisc mq state UP group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether b4:96:91:c8:a6:31 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9702 numtxqueues 112 numrxqueues 112 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab mco-hostname[8575]: waiting for non-localhost hostname to be assigned Jan 23 16:15:27 hub-master-0.workload.bos2.lab mco-hostname[8575]: node identified as hub-master-0.workload.bos2.lab Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Started Configures OVS with proper host networking configuration. 
-- Subject: Unit ovs-configuration.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit ovs-configuration.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab rpc.statd[8595]: Version 2.3.3 starting Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 8: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether e6:05:44:0a:7c:b5 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 10: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether c6:76:de:0c:d9:da brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: inet6 fe80::c476:deff:fe0c:d9da/64 scope link Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: valid_lft forever preferred_lft forever Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 11: ovn-k8s-mp0: mtu 1400 qdisc noop state DOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether 12:16:15:ff:96:b9 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 12: br-int: mtu 1400 qdisc noop state DOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether ba:22:7f:9b:cf:d8 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: 15: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: link/ether b4:96:91:c8:a6:30 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: inet 192.168.18.12/25 brd 192.168.18.127 scope global dynamic noprefixroute br-ex Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: valid_lft 86393sec preferred_lft 86393sec Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: inet6 2600:52:7:18::12/128 scope global dynamic noprefixroute Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: valid_lft 43195sec preferred_lft 43195sec Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: inet6 
fe80::b696:91ff:fec8:a630/64 scope link noprefixroute Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8571]: valid_lft forever preferred_lft forever Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: ovs-configuration.service: Consumed 1.084s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit ovs-configuration.service completed and consumed the indicated resources. Jan 23 16:15:27 hub-master-0.workload.bos2.lab rpc.statd[8595]: Flags: TI-RPC Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip route show Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Starting Wait for a non-localhost hostname... -- Subject: Unit node-valid-hostname.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit node-valid-hostname.service has begun starting up. Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8572]: default via 192.168.18.1 dev br-ex proto dhcp src 192.168.18.12 metric 48 Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8572]: 192.168.18.0/25 dev br-ex proto kernel scope link src 192.168.18.12 metric 48 Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Started Wait for a non-localhost hostname. -- Subject: Unit node-valid-hostname.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit node-valid-hostname.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + ip -6 route show Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Reached target Network is Online. -- Subject: Unit network-online.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit network-online.target has finished starting up. -- -- The start-up result is done. 
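
The device, connection, address, and route listings spread through the surrounding entries are configure-ovs.sh's normal exit report; per the trace it is just five read-only commands:

    nmcli -g all device | grep -v unmanaged
    nmcli -g all connection
    ip -d address show
    ip route show
    ip -6 route show
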
Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.298728564Z" level=info msg="Starting CRI-O, version: 1.25.1-5.rhaos4.12.git6005903.el8, git: unknown(clean)" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.298900634Z" level=info msg="Node configuration value for hugetlb cgroup is true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.298907645Z" level=info msg="Node configuration value for pid cgroup is true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.298945632Z" level=info msg="Node configuration value for memoryswap cgroup is true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.298950292Z" level=info msg="Node configuration value for cgroup v2 is false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.304042981Z" level=info msg="Node configuration value for systemd CollectMode is true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8573]: ::1 dev lo proto kernel metric 256 pref medium Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8573]: 2600:52:7:18::12 dev br-ex proto kernel metric 48 pref medium Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8573]: 2600:52:7:18::/64 dev br-ex proto ra metric 48 pref medium Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8573]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8573]: fe80::/64 dev br-ex proto kernel metric 1024 pref medium Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[8573]: default via fe80::1532:4e62:7604:4733 dev br-ex proto ra metric 48 pref medium Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive. Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.308465053Z" level=info msg="Node configuration value for systemd AllowedCPUs is true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.309321661Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL" Jan 23 16:15:27 hub-master-0.workload.bos2.lab configure-ovs.sh[5125]: + exit 0 Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Starting NFS status monitor for NFSv2/3 locking.... -- Subject: Unit rpc-statd.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rpc-statd.service has begun starting up. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Starting Dynamically sets the system reserved for the kubelet... -- Subject: Unit kubelet-auto-node-size.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubelet-auto-node-size.service has begun starting up. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Started Dynamically sets the system reserved for the kubelet. -- Subject: Unit kubelet-auto-node-size.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubelet-auto-node-size.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Starting RPC Bind... 
-- Subject: Unit rpcbind.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rpcbind.service has begun starting up. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)... -- Subject: Unit crio.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio.service has begun starting up. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Started RPC Bind. -- Subject: Unit rpcbind.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rpcbind.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Started NFS status monitor for NFSv2/3 locking.. -- Subject: Unit rpc-statd.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rpc-statd.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.359652507Z" level=info msg="Checkpoint/restore support disabled" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.359667734Z" level=info msg="Using seccomp default profile when unspecified: true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.359672961Z" level=info msg="Using the internal default seccomp profile" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.359677490Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.359682011Z" level=info msg="No blockio config file specified, blockio not configured" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.359686329Z" level=info msg="RDT not available in the host system" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.362596415Z" level=info msg="Conmon does support the --sync option" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.362616515Z" level=info msg="Conmon does support the --log-global-size-max option" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.365115674Z" level=info msg="Conmon does support the --sync option" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.365131367Z" level=info msg="Conmon does support the --log-global-size-max option" Jan 23 16:15:27 hub-master-0.workload.bos2.lab chronyd[2922]: Selected source 192.168.18.9 Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.450661804Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.450678615Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.491741278Z" level=warning msg="Error encountered when checking whether cri-o should wipe containers: open /var/run/crio/version: no such file or directory" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.492312035Z" level=info msg="Serving metrics on :9537 via HTTP" Jan 23 16:15:27 
hub-master-0.workload.bos2.lab systemd[1]: Started Container Runtime Interface for OCI (CRI-O). -- Subject: Unit crio.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Starting Kubernetes Kubelet... -- Subject: Unit kubelet.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubelet.service has begun starting up. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.793075 8631 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
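
Each deprecation warning above points at the same remedy: move the flag into the KubeletConfiguration file named by --config (here /etc/kubernetes/kubelet.conf). An illustrative fragment only, with invented values and a hypothetical output path, showing where three of the flagged options would live:

    # illustrative KubeletConfiguration fragment (invented values), written the
    # way the deprecation notices suggest, via the file given to --config
    printf '%s\n' \
      'apiVersion: kubelet.config.k8s.io/v1beta1' \
      'kind: KubeletConfiguration' \
      'address: 192.168.18.12' \
      'registerWithTaints: []' \
      'systemReserved:' \
      '  cpu: 500m' \
      '  memory: 1Gi' \
      > /tmp/kubelet-config-example.yaml   # hypothetical path
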
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794217 8631 flags.go:64] FLAG: --add-dir-header="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794225 8631 flags.go:64] FLAG: --address="192.168.18.12" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794229 8631 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794233 8631 flags.go:64] FLAG: --alsologtostderr="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794236 8631 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794239 8631 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794242 8631 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794244 8631 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794247 8631 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794250 8631 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794253 8631 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794256 8631 flags.go:64] FLAG: --azure-container-registry-config="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794258 8631 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794261 8631 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794263 
8631 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794266 8631 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794268 8631 flags.go:64] FLAG: --cgroup-root="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794270 8631 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794273 8631 flags.go:64] FLAG: --client-ca-file="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794275 8631 flags.go:64] FLAG: --cloud-config="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794277 8631 flags.go:64] FLAG: --cloud-provider="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794279 8631 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794282 8631 flags.go:64] FLAG: --cluster-domain="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794284 8631 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794287 8631 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794290 8631 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794293 8631 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794295 8631 flags.go:64] FLAG: --container-runtime="remote" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794297 8631 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794300 8631 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794303 8631 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794305 8631 flags.go:64] FLAG: --contention-profiling="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794308 8631 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794310 8631 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794313 8631 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794316 8631 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794320 8631 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794322 8631 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794324 8631 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 16:15:27 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794327 8631 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794329 8631 flags.go:64] FLAG: --enable-server="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794331 8631 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794335 8631 flags.go:64] FLAG: --event-burst="10" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794337 8631 flags.go:64] FLAG: --event-qps="5" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794340 8631 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794342 8631 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794344 8631 flags.go:64] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794352 8631 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794355 8631 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794357 8631 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794359 8631 flags.go:64] FLAG: --eviction-soft="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794362 8631 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794364 8631 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794366 8631 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794369 8631 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794371 8631 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794373 8631 flags.go:64] FLAG: --feature-gates="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794376 8631 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794378 8631 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794381 8631 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794383 8631 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794385 8631 flags.go:64] FLAG: --healthz-port="10248" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794400 8631 flags.go:64] FLAG: --help="false" Jan 23 16:15:27 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794403 8631 flags.go:64] FLAG: --hostname-override="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794406 8631 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794409 8631 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794411 8631 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794414 8631 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794416 8631 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794419 8631 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794421 8631 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794423 8631 flags.go:64] FLAG: --iptables-drop-bit="15" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794426 8631 flags.go:64] FLAG: --iptables-masquerade-bit="14" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794428 8631 flags.go:64] FLAG: --keep-terminated-pod-volumes="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794430 8631 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794433 8631 flags.go:64] FLAG: --kube-api-burst="10" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794435 8631 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794438 8631 flags.go:64] FLAG: --kube-api-qps="5" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794440 8631 flags.go:64] FLAG: --kube-reserved="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794442 8631 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794445 8631 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794447 8631 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794450 8631 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794452 8631 flags.go:64] FLAG: --lock-file="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794454 8631 flags.go:64] FLAG: --log-backtrace-at=":0" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794457 8631 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794460 8631 flags.go:64] FLAG: --log-dir="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794462 8631 flags.go:64] FLAG: 
--log-file="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794465 8631 flags.go:64] FLAG: --log-file-max-size="1800" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794467 8631 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794470 8631 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794474 8631 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794476 8631 flags.go:64] FLAG: --logging-format="text" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794478 8631 flags.go:64] FLAG: --logtostderr="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794484 8631 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794487 8631 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794489 8631 flags.go:64] FLAG: --manifest-url="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794491 8631 flags.go:64] FLAG: --manifest-url-header="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794495 8631 flags.go:64] FLAG: --master-service-namespace="default" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794497 8631 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794500 8631 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794503 8631 flags.go:64] FLAG: --max-pods="110" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794505 8631 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794508 8631 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794510 8631 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794513 8631 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794515 8631 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794517 8631 flags.go:64] FLAG: --node-ip="192.168.18.12" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794520 8631 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794525 8631 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794528 8631 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794530 8631 flags.go:64] 
FLAG: --one-output="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794533 8631 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794535 8631 flags.go:64] FLAG: --pod-cidr="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794537 8631 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794541 8631 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794543 8631 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794546 8631 flags.go:64] FLAG: --pods-per-core="0" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794548 8631 flags.go:64] FLAG: --port="10250" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794551 8631 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794553 8631 flags.go:64] FLAG: --provider-id="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794555 8631 flags.go:64] FLAG: --qos-reserved="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794558 8631 flags.go:64] FLAG: --read-only-port="10255" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794560 8631 flags.go:64] FLAG: --register-node="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794562 8631 flags.go:64] FLAG: --register-schedulable="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794565 8631 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794570 8631 flags.go:64] FLAG: --registry-burst="10" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794573 8631 flags.go:64] FLAG: --registry-qps="5" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794576 8631 flags.go:64] FLAG: --reserved-cpus="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794578 8631 flags.go:64] FLAG: --reserved-memory="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794581 8631 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794584 8631 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794586 8631 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794589 8631 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794591 8631 flags.go:64] FLAG: --runonce="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794593 8631 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 16:15:27 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794596 8631 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794598 8631 flags.go:64] FLAG: --seccomp-default="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794601 8631 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794603 8631 flags.go:64] FLAG: --skip-headers="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794606 8631 flags.go:64] FLAG: --skip-log-headers="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794608 8631 flags.go:64] FLAG: --stderrthreshold="2" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794611 8631 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794613 8631 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794616 8631 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794618 8631 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794621 8631 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794623 8631 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794625 8631 flags.go:64] FLAG: --storage-driver-user="root" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794628 8631 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794630 8631 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794632 8631 flags.go:64] FLAG: --system-cgroups="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794635 8631 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794639 8631 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794641 8631 flags.go:64] FLAG: --tls-cert-file="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794644 8631 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794646 8631 flags.go:64] FLAG: --tls-min-version="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794649 8631 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794652 8631 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794656 8631 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794658 8631 
flags.go:64] FLAG: --v="2" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794662 8631 flags.go:64] FLAG: --version="false" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794665 8631 flags.go:64] FLAG: --vmodule="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794668 8631 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794671 8631 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.794714 8631 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]} Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.796964 8631 server.go:413] "Kubelet version" kubeletVersion="v1.25.4+77bec7a" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.796978 8631 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.797015 8631 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]} Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.797069 8631 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]} Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.797154 8631 server.go:825] "Client rotation is on, will bootstrap in background" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.798693 8631 bootstrap.go:84] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.798732 8631 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
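[editor's note] The FLAG: dump kubelet emits at startup (the flags.go:64 lines above) is regular enough to parse mechanically into a lookup table. A sketch over three lines copied from the dump; it assumes every value is double-quoted exactly as printed here:

```python
import re

# Lines in the format kubelet logs at startup (flags.go:64] FLAG: --name="value").
dump = '''
I0123 16:15:27.794225 8631 flags.go:64] FLAG: --address="192.168.18.12"
I0123 16:15:27.794287 8631 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
I0123 16:15:27.794344 8631 flags.go:64] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
'''

# findall with two capture groups yields (name, value) pairs -> dict.
flags = dict(re.findall(r'FLAG: (--[\w-]+)="([^"]*)"', dump))
print(flags["--address"])        # 192.168.18.12
print(flags["--eviction-hard"])  # imagefs.available<15%,memory.available<100Mi,...
```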
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.798875 8631 server.go:882] "Starting client certificate rotation" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.798881 8631 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.799051 8631 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2023-01-23 18:46:05 +0000 UTC, rotation deadline is 2023-01-23 17:46:41.48216604 +0000 UTC Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.799073 8631 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1h31m13.68309402s for next certificate rotation Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.800795 8631 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.800871 8631 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.801916 8631 manager.go:163] cAdvisor running in container: "/system.slice/kubelet.service" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.802877 8631 fs.go:133] Filesystem UUIDs: map[6b5eaf26-520d-4e42-90f4-4869c15c705f:/dev/sda3 AFB5-0367:/dev/sda2 b7d7393a-4ab5-4434-a099-e66267f4b07d:/dev/sda4] Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.802889 8631 fs.go:134] Filesystem partitions: map[/dev/sda3:{mountpoint:/boot major:8 minor:3 fsType:ext4 blockSize:0} /dev/sda4:{mountpoint:/var major:8 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:25 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0/userdata/shm major:0 minor:47 fsType:tmpfs blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/1fa077883cf44c98066a10a131c5c196782825d0cbdf5d91df8b6a27107e7008/merged major:0 minor:48 fsType:overlay blockSize:0}] Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.802915 8631 nvidia.go:54] NVIDIA GPU metrics disabled Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.844462 8631 manager.go:212] Machine: {Timestamp:2023-01-23 16:15:27.838144107 +0000 UTC m=+0.259869066 CPUVendorID:GenuineIntel NumCores:112 NumPhysicalCores:28 NumSockets:2 CpuFrequency:3400000 MemoryCapacity:269716951040 MemoryByType:map[Unbuffered-DDR4:0xc001034240] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:2d1c3c5e56e644be8e0c86aaa5d61f6a SystemUUID:4c4c4544-0051-4e10-8031-b3c04f4e4833 BootID:9ff82f2a-95c8-48a8-9dd4-6f8019a7e250 Filesystems:[{Device:/sys/fs/cgroup 
DeviceMajor:0 DeviceMinor:25 Capacity:134858473472 Type:vfs Inodes:32924432 HasInodes:true} {Device:/dev/sda4 DeviceMajor:8 DeviceMinor:4 Capacity:479011516416 Type:vfs Inodes:233897408 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:43 Capacity:134858473472 Type:vfs Inodes:32924432 HasInodes:true} {Device:/dev/sda3 DeviceMajor:8 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/f301880c9d0864974d9ddab2965ec54668acfcd53a6d5fce14e9ad80bdaa36a0/userdata/shm DeviceMajor:0 DeviceMinor:47 Capacity:65536000 Type:vfs Inodes:32924432 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:479011516416 Type:vfs Inodes:233897408 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:134858473472 Type:vfs Inodes:32924432 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:134858473472 Type:vfs Inodes:32924432 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:479559942144 Scheduler:mq-deadline} 8:16:{Name:sdb Major:8 Minor:16 Size:479559942144 Scheduler:mq-deadline}] NetworkDevices:[{Name:br-ex MacAddress:b4:96:91:c8:a6:30 Speed:0 Mtu:1500} {Name:br-int MacAddress:ba:22:7f:9b:cf:d8 Speed:0 Mtu:1400} {Name:eno12399 MacAddress:b4:96:91:c8:a6:30 Speed:25000 Mtu:1500} {Name:eno12409 MacAddress:b4:96:91:c8:a6:31 Speed:25000 Mtu:1500} {Name:eno8303 MacAddress:b0:7b:25:de:1a:bc Speed:-1 Mtu:1500} {Name:eno8403 MacAddress:b0:7b:25:de:1a:bd Speed:-1 Mtu:1500} {Name:ens2f0 MacAddress:04:3f:72:fe:d9:b8 Speed:-1 Mtu:1500} {Name:ens2f1 MacAddress:04:3f:72:fe:d9:b9 Speed:-1 Mtu:1500} {Name:genev_sys_6081 MacAddress:c6:76:de:0c:d9:da Speed:0 Mtu:65000} {Name:ovn-k8s-mp0 MacAddress:12:16:15:ff:96:b9 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e6:05:44:0a:7c:b5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:134485037056 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 56] Caches:[{Id:0 Size:49152 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:16 Threads:[10 66] Caches:[{Id:16 Size:49152 Type:Data Level:1} {Id:16 Size:32768 Type:Instruction Level:1} {Id:16 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:11 Threads:[100 44] Caches:[{Id:11 Size:49152 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:25 Threads:[102 46] Caches:[{Id:25 Size:49152 Type:Data Level:1} {Id:25 Size:32768 Type:Instruction Level:1} {Id:25 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:12 Threads:[104 48] Caches:[{Id:12 Size:49152 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:26 Threads:[106 50] Caches:[{Id:26 Size:49152 Type:Data Level:1} {Id:26 Size:32768 Type:Instruction Level:1} {Id:26 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:13 Threads:[108 52] Caches:[{Id:13 Size:49152 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:27 Threads:[110 54] Caches:[{Id:27 Size:49152 Type:Data Level:1} {Id:27 Size:32768 Type:Instruction Level:1} {Id:27 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:3 Threads:[12 68] Caches:[{Id:3 Size:49152 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1310720 Type:Unified 
Level:2}] UncoreCaches:[] SocketID:0} {Id:17 Threads:[14 70] Caches:[{Id:17 Size:49152 Type:Data Level:1} {Id:17 Size:32768 Type:Instruction Level:1} {Id:17 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:4 Threads:[16 72] Caches:[{Id:4 Size:49152 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:18 Threads:[18 74] Caches:[{Id:18 Size:49152 Type:Data Level:1} {Id:18 Size:32768 Type:Instruction Level:1} {Id:18 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:14 Threads:[2 58] Caches:[{Id:14 Size:49152 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:5 Threads:[20 76] Caches:[{Id:5 Size:49152 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:19 Threads:[22 78] Caches:[{Id:19 Size:49152 Type:Data Level:1} {Id:19 Size:32768 Type:Instruction Level:1} {Id:19 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:6 Threads:[24 80] Caches:[{Id:6 Size:49152 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:20 Threads:[26 82] Caches:[{Id:20 Size:49152 Type:Data Level:1} {Id:20 Size:32768 Type:Instruction Level:1} {Id:20 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:7 Threads:[28 84] Caches:[{Id:7 Size:49152 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:21 Threads:[30 86] Caches:[{Id:21 Size:49152 Type:Data Level:1} {Id:21 Size:32768 Type:Instruction Level:1} {Id:21 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:8 Threads:[32 88] Caches:[{Id:8 Size:49152 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:22 Threads:[34 90] Caches:[{Id:22 Size:49152 Type:Data Level:1} {Id:22 Size:32768 Type:Instruction Level:1} {Id:22 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:9 Threads:[36 92] Caches:[{Id:9 Size:49152 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:23 Threads:[38 94] Caches:[{Id:23 Size:49152 Type:Data Level:1} {Id:23 Size:32768 Type:Instruction Level:1} {Id:23 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:1 Threads:[4 60] Caches:[{Id:1 Size:49152 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:10 Threads:[40 96] Caches:[{Id:10 Size:49152 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:24 Threads:[42 98] Caches:[{Id:24 Size:49152 Type:Data Level:1} {Id:24 Size:32768 Type:Instruction Level:1} {Id:24 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:15 Threads:[6 62] Caches:[{Id:15 Size:49152 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:2 Threads:[64 8] Caches:[{Id:2 Size:49152 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0}] Caches:[{Id:0 Size:44040192 Type:Unified Level:3}]} {Id:1 
Memory:135231913984 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[1 57] Caches:[{Id:64 Size:49152 Type:Data Level:1} {Id:64 Size:32768 Type:Instruction Level:1} {Id:64 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:11 Threads:[101 45] Caches:[{Id:75 Size:49152 Type:Data Level:1} {Id:75 Size:32768 Type:Instruction Level:1} {Id:75 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:25 Threads:[103 47] Caches:[{Id:89 Size:49152 Type:Data Level:1} {Id:89 Size:32768 Type:Instruction Level:1} {Id:89 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:12 Threads:[105 49] Caches:[{Id:76 Size:49152 Type:Data Level:1} {Id:76 Size:32768 Type:Instruction Level:1} {Id:76 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:26 Threads:[107 51] Caches:[{Id:90 Size:49152 Type:Data Level:1} {Id:90 Size:32768 Type:Instruction Level:1} {Id:90 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:13 Threads:[109 53] Caches:[{Id:77 Size:49152 Type:Data Level:1} {Id:77 Size:32768 Type:Instruction Level:1} {Id:77 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:16 Threads:[11 67] Caches:[{Id:80 Size:49152 Type:Data Level:1} {Id:80 Size:32768 Type:Instruction Level:1} {Id:80 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:27 Threads:[111 55] Caches:[{Id:91 Size:49152 Type:Data Level:1} {Id:91 Size:32768 Type:Instruction Level:1} {Id:91 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:3 Threads:[13 69] Caches:[{Id:67 Size:49152 Type:Data Level:1} {Id:67 Size:32768 Type:Instruction Level:1} {Id:67 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:17 Threads:[15 71] Caches:[{Id:81 Size:49152 Type:Data Level:1} {Id:81 Size:32768 Type:Instruction Level:1} {Id:81 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:4 Threads:[17 73] Caches:[{Id:68 Size:49152 Type:Data Level:1} {Id:68 Size:32768 Type:Instruction Level:1} {Id:68 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:18 Threads:[19 75] Caches:[{Id:82 Size:49152 Type:Data Level:1} {Id:82 Size:32768 Type:Instruction Level:1} {Id:82 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:5 Threads:[21 77] Caches:[{Id:69 Size:49152 Type:Data Level:1} {Id:69 Size:32768 Type:Instruction Level:1} {Id:69 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:19 Threads:[23 79] Caches:[{Id:83 Size:49152 Type:Data Level:1} {Id:83 Size:32768 Type:Instruction Level:1} {Id:83 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:6 Threads:[25 81] Caches:[{Id:70 Size:49152 Type:Data Level:1} {Id:70 Size:32768 Type:Instruction Level:1} {Id:70 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:20 Threads:[27 83] Caches:[{Id:84 Size:49152 Type:Data Level:1} {Id:84 Size:32768 Type:Instruction Level:1} {Id:84 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:7 Threads:[29 85] Caches:[{Id:71 Size:49152 Type:Data Level:1} {Id:71 Size:32768 Type:Instruction Level:1} {Id:71 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:14 Threads:[3 59] Caches:[{Id:78 Size:49152 Type:Data Level:1} {Id:78 Size:32768 Type:Instruction Level:1} {Id:78 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:21 Threads:[31 87] Caches:[{Id:85 Size:49152 Type:Data Level:1} {Id:85 Size:32768 Type:Instruction Level:1} {Id:85 Size:1310720 Type:Unified Level:2}] 
UncoreCaches:[] SocketID:1} {Id:8 Threads:[33 89] Caches:[{Id:72 Size:49152 Type:Data Level:1} {Id:72 Size:32768 Type:Instruction Level:1} {Id:72 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:22 Threads:[35 91] Caches:[{Id:86 Size:49152 Type:Data Level:1} {Id:86 Size:32768 Type:Instruction Level:1} {Id:86 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:9 Threads:[37 93] Caches:[{Id:73 Size:49152 Type:Data Level:1} {Id:73 Size:32768 Type:Instruction Level:1} {Id:73 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:23 Threads:[39 95] Caches:[{Id:87 Size:49152 Type:Data Level:1} {Id:87 Size:32768 Type:Instruction Level:1} {Id:87 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:10 Threads:[41 97] Caches:[{Id:74 Size:49152 Type:Data Level:1} {Id:74 Size:32768 Type:Instruction Level:1} {Id:74 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:24 Threads:[43 99] Caches:[{Id:88 Size:49152 Type:Data Level:1} {Id:88 Size:32768 Type:Instruction Level:1} {Id:88 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:1 Threads:[5 61] Caches:[{Id:65 Size:49152 Type:Data Level:1} {Id:65 Size:32768 Type:Instruction Level:1} {Id:65 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:15 Threads:[63 7] Caches:[{Id:79 Size:49152 Type:Data Level:1} {Id:79 Size:32768 Type:Instruction Level:1} {Id:79 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1} {Id:2 Threads:[65 9] Caches:[{Id:66 Size:49152 Type:Data Level:1} {Id:66 Size:32768 Type:Instruction Level:1} {Id:66 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:1}] Caches:[{Id:1 Size:44040192 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.844701 8631 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
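[editor's note] A quick consistency check on the Machine record above. cAdvisor prints NumCores:112, NumPhysicalCores:28, NumSockets:2, and each socket's topology lists core IDs 0-27. Reading NumPhysicalCores as cores per socket (an assumption drawn from the per-socket core lists shown here, not from cAdvisor's documentation) makes the arithmetic close:

```python
# Figures exactly as printed in the Machine: record above.
num_logical   = 112            # NumCores
cores_per_skt = 28             # NumPhysicalCores (core IDs 0-27 listed per socket)
sockets       = 2              # NumSockets
mem_bytes     = 269716951040   # MemoryCapacity

threads_per_core = num_logical // (cores_per_skt * sockets)
print(threads_per_core)                    # 2 -> SMT/hyperthreading is on
print(round(mem_bytes / 2**30, 1), "GiB")  # ~251.2 GiB across both NUMA nodes
```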
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.844819 8631 manager.go:228] Version: {KernelVersion:4.18.0-372.40.1.el8_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 412.86.202301061548-0 (Ootpa) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.845062 8631 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.845112 8631 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/system.slice/crio.service SystemCgroupsName:/system.slice KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[cpu:{i:{value:500 scale:-3} d:{Dec:} s:500m Format:DecimalSI} ephemeral-storage:{i:{value:1073741824 scale:0} d:{Dec:} s:1Gi Format:BinarySI} memory:{i:{value:1073741824 scale:0} d:{Dec:} s:1Gi Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:4096 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.845127 8631 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.845135 8631 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.845145 8631 manager.go:127] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.845159 8631 server.go:64] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.845236 8631 state_mem.go:36] "Initialized new in-memory state store" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.845275 8631 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.849132 8631 remote_runtime.go:139] "Using CRI v1 runtime 
API" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.849155 8631 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.849903 8631 remote_image.go:95] "Using CRI v1 image API" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.849920 8631 server.go:1136] "Using root directory" path="/var/lib/kubelet" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.850424 8631 kubelet.go:393] "Attempting to sync node with API server" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.850439 8631 kubelet.go:282] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.850448 8631 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.850456 8631 kubelet.go:293] "Adding apiserver pod source" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.850463 8631 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.850926 8631 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="cri-o" version="1.25.1-5.rhaos4.12.git6005903.el8" apiVersion="v1" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851102 8631 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
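[editor's note] Both util_unix.go warnings above say the same thing: a bare socket path such as /var/run/crio/crio.sock is still accepted but deprecated in favor of the full unix:// URL the log prints beside it. A tiny sketch of that normalization; the helper name is hypothetical:

```python
def normalize_cri_endpoint(endpoint: str) -> str:
    """Return the full-URL form the kubelet warning asks for.

    A bare filesystem path is treated as a unix socket; anything that
    already carries a scheme is passed through unchanged.
    """
    if "://" in endpoint:
        return endpoint
    return "unix://" + endpoint

print(normalize_cri_endpoint("/var/run/crio/crio.sock"))
# unix:///var/run/crio/crio.sock  -- matches the URL= field in the log
```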
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851538 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851548 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851559 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851565 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851570 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851575 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851580 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/cinder" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851586 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851595 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851600 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851606 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851612 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851617 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851626 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851632 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/glusterfs" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851637 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/cephfs" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851642 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851649 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851655 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851660 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851666 8631 plugins.go:646] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851678 8631 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851764 8631 server.go:1175] "Started kubelet" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.851982 8631 server.go:155] "Starting to listen" address="192.168.18.12" port=10250 Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:27.852195 8631 kubelet.go:1333] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Started Kubernetes Kubelet. -- Subject: Unit kubelet.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubelet.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Reached target Multi-User System. -- Subject: Unit multi-user.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit multi-user.target has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.853929 8631 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.853948 8631 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.853962 8631 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate expiration is 2023-01-23 18:46:05 +0000 UTC, rotation deadline is 2023-01-23 17:54:47.431686098 +0000 UTC Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.853974 8631 certificate_manager.go:270] kubernetes.io/kubelet-serving: Waiting 1h39m19.577713136s for next certificate rotation Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.854010 8631 volume_manager.go:291] "The desired_state_of_world populator starts" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.854019 8631 volume_manager.go:293] "Starting Kubelet Volume Manager" Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.854042 8631 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Reached target Graphical Interface. -- Subject: Unit graphical.target has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit graphical.target has finished starting up. -- -- The start-up result is done. 
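[editor's note] With kubelet.service up and the server listening on 192.168.18.12:10250, the flag dump earlier shows a local health endpoint (--healthz-bind-address 127.0.0.1, --healthz-port 10248). A liveness-probe sketch, assuming it runs on the node itself and that the default /healthz path is enabled:

```python
import urllib.request

# The flag dump above gives --healthz-bind-address="127.0.0.1" and
# --healthz-port="10248"; run from hub-master-0 itself (assumption),
# a healthy kubelet answers 200 with body "ok".
with urllib.request.urlopen("http://127.0.0.1:10248/healthz", timeout=5) as resp:
    print(resp.status, resp.read().decode())   # expected: 200 ok
```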
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.854340 8631 server.go:438] "Adding debug handlers to kubelet server" Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.854391876Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=dbb17fa2-49e7-4fce-afed-2b9717c7fd30 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:27.854549682Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b not found" id=dbb17fa2-49e7-4fce-afed-2b9717c7fd30 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.855899 8631 factory.go:153] Registering CRI-O factory Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.855912 8631 factory.go:55] Registering systemd factory Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.855961 8631 factory.go:103] Registering Raw factory Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.856007 8631 manager.go:1201] Started watching for new ooms in manager Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Starting Update UTMP about System Runlevel Changes... -- Subject: Unit systemd-update-utmp-runlevel.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-update-utmp-runlevel.service has begun starting up. Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.860357 8631 manager.go:302] Starting recovery of all containers Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: systemd-update-utmp-runlevel.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit systemd-update-utmp-runlevel.service has successfully entered the 'dead' state. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Started Update UTMP about System Runlevel Changes. -- Subject: Unit systemd-update-utmp-runlevel.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit systemd-update-utmp-runlevel.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Startup finished in 4.475s (kernel) + 1min 41.315s (initrd) + 3min 27.955s (userspace) = 5min 13.747s. -- Subject: System start-up is now complete -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- All system services necessary queued for starting at boot have been -- started. Note that this does not mean that the machine is now idle as services -- might still be busy with completing start-up. -- -- Kernel start-up required 4475802 microseconds. -- -- Initial RAM disk start-up required 101315756 microseconds. -- -- Userspace start-up required 207955462 microseconds. Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: systemd-update-utmp-runlevel.service: Consumed 4ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit systemd-update-utmp-runlevel.service completed and consumed the indicated resources. 
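[editor's note] The "Startup finished" summary above reports the same three durations twice, once human-readable and once in microseconds. Converting the microsecond figures back (systemd truncates to milliseconds rather than rounding) reproduces the printed values exactly:

```python
# The three figures systemd reports in microseconds on the lines above.
kernel_us, initrd_us, userspace_us = 4475802, 101315756, 207955462

def fmt(us: int) -> str:
    ms = us // 1000                      # truncate to ms, as systemd does
    m, ms = divmod(ms, 60_000)
    return (f"{m}min " if m else "") + f"{ms // 1000}.{ms % 1000:03d}s"

for label, us in [("kernel", kernel_us), ("initrd", initrd_us),
                  ("userspace", userspace_us),
                  ("total", kernel_us + initrd_us + userspace_us)]:
    print(label, "=", fmt(us))
# kernel = 4.475s, initrd = 1min 41.315s, userspace = 3min 27.955s,
# total = 5min 13.747s  -- matching the summary line above
```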
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.884076 8631 manager.go:307] Recovery completed
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.922795 8631 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.922813 8631 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.922823 8631 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.923088 8631 policy_none.go:49] "None policy: Start"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.923713 8631 memory_manager.go:168] "Starting memorymanager" policy="None"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.923730 8631 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.924348 8631 container_manager_linux.go:427] "Updating kernel flag" flag="kernel/panic" expectedValue=10 actualValue=0
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.924413 8631 container_manager_linux.go:427] "Updating kernel flag" flag="vm/overcommit_memory" expectedValue=1 actualValue=0
Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods.slice.
-- Subject: Unit kubepods.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable.slice.
-- Subject: Unit kubepods-burstable.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:27 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-besteffort.slice.
-- Subject: Unit kubepods-besteffort.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-besteffort.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.941101 8631 manager.go:273] "Starting Device Plugin manager"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.941141 8631 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.941148 8631 server.go:77] "Starting device plugin registration server"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.941400 8631 plugin_watcher.go:52] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.941460 8631 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.941468 8631 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.954044 8631 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.954941 8631 kubelet_node_status.go:590] "Recording event message for node" node="hub-master-0.workload.bos2.lab" event="NodeHasSufficientMemory"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.954957 8631 kubelet_node_status.go:590] "Recording event message for node" node="hub-master-0.workload.bos2.lab" event="NodeHasNoDiskPressure"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.954965 8631 kubelet_node_status.go:590] "Recording event message for node" node="hub-master-0.workload.bos2.lab" event="NodeHasSufficientPID"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.954978 8631 kubelet_node_status.go:72] "Attempting to register node" node="hub-master-0.workload.bos2.lab"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.964878 8631 kubelet_node_status.go:110] "Node was previously registered" node="hub-master-0.workload.bos2.lab"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.964918 8631 kubelet_node_status.go:75] "Successfully registered node" node="hub-master-0.workload.bos2.lab"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.967154 8631 kubelet_node_status.go:590] "Recording event message for node" node="hub-master-0.workload.bos2.lab" event="NodeHasSufficientMemory"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.967172 8631 kubelet_node_status.go:590] "Recording event message for node" node="hub-master-0.workload.bos2.lab" event="NodeHasNoDiskPressure"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.967178 8631 kubelet_node_status.go:590] "Recording event message for node" node="hub-master-0.workload.bos2.lab" event="NodeHasSufficientPID"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.967189 8631 kubelet_node_status.go:590] "Recording event message for node" node="hub-master-0.workload.bos2.lab" event="NodeNotReady"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.967219 8631 setters.go:545] "Node became not ready" node="hub-master-0.workload.bos2.lab" condition={Type:Ready Status:False LastHeartbeatTime:2023-01-23 16:15:27.967182481 +0000 UTC m=+0.388907436 LastTransitionTime:2023-01-23 16:15:27.967182481 +0000 UTC m=+0.388907436 Reason:KubeletNotReady Message:PLEG is not healthy: pleg has yet to be successful}
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.968289 8631 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.995069 8631 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.995086 8631 status_manager.go:161] "Starting to sync pod status with apiserver"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:27.995098 8631 kubelet.go:2033] "Starting kubelet main sync loop"
Jan 23 16:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:27.995159 8631 kubelet.go:2057] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.095960 8631 kubelet.go:2119] "SyncLoop ADD" source="file" pods=[openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab openshift-etcd/etcd-hub-master-0.workload.bos2.lab openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab]
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.095989 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.096039 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.096070 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.096095 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.096133 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.096156 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.096213 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.099867 8631 status_manager.go:677] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" oldPodUID=77321459d336b7d15305c9b9a83e4081 podUID=a98cb19c-8edd-440f-beaa-65d6ce45f325
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.102236 8631 status_manager.go:677] "Pod was deleted and then recreated, skipping status update" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" oldPodUID=5eb8d73fcd73cda1a9e34d91bb51e339 podUID=86125da6-b28c-43b7-933c-09b7766c9fc7
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.104088 8631 status_manager.go:677] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" oldPodUID=9552ff413d8390655360ce968177c622 podUID=1529626e-45a5-4b21-9291-5033d89dd676
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod77321459d336b7d15305c9b9a83e4081.slice.
-- Subject: Unit kubepods-burstable-pod77321459d336b7d15305c9b9a83e4081.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-pod77321459d336b7d15305c9b9a83e4081.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.106883 8631 status_manager.go:677] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" oldPodUID=b8e918bfaafef0fc7d13026942c43171 podUID=c78e08d7-010f-403b-8f28-b36b367d37f5
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.108767 8631 status_manager.go:677] "Pod was deleted and then recreated, skipping status update" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" oldPodUID=841c556dbc6afe45e33a42a9dd8b5492 podUID=52009f60-3c63-478a-9f11-60e8ca43f854
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.110659 8631 status_manager.go:677] "Pod was deleted and then recreated, skipping status update" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" oldPodUID=04f654eda4f14a4bee64377a5c765343 podUID=fa3ef865-76a4-4868-88ab-5fde70b82e75
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod5eb8d73fcd73cda1a9e34d91bb51e339.slice.
-- Subject: Unit kubepods-burstable-pod5eb8d73fcd73cda1a9e34d91bb51e339.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-pod5eb8d73fcd73cda1a9e34d91bb51e339.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.114287 8631 status_manager.go:677] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" oldPodUID=38eebeadc7ddc4d42d1de9a5e4ac69f1 podUID=a89df195-c0fa-4293-821d-03c4949f3f27
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:28.115984 8631 kubelet.go:1735] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-hub-master-0.workload.bos2.lab\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod9552ff413d8390655360ce968177c622.slice.
-- Subject: Unit kubepods-burstable-pod9552ff413d8390655360ce968177c622.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-pod9552ff413d8390655360ce968177c622.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod841c556dbc6afe45e33a42a9dd8b5492.slice.
-- Subject: Unit kubepods-burstable-pod841c556dbc6afe45e33a42a9dd8b5492.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-pod841c556dbc6afe45e33a42a9dd8b5492.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:28.129262 8631 kubelet.go:1735] "Failed creating a mirror pod for" err="pods \"coredns-hub-master-0.workload.bos2.lab\" already exists" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-podb8e918bfaafef0fc7d13026942c43171.slice.
-- Subject: Unit kubepods-burstable-podb8e918bfaafef0fc7d13026942c43171.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-podb8e918bfaafef0fc7d13026942c43171.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:28.132424 8631 kubelet.go:1735] "Failed creating a mirror pod for" err="pods \"kube-apiserver-hub-master-0.workload.bos2.lab\" already exists" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod04f654eda4f14a4bee64377a5c765343.slice.
-- Subject: Unit kubepods-burstable-pod04f654eda4f14a4bee64377a5c765343.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-pod04f654eda4f14a4bee64377a5c765343.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:28.136526 8631 kubelet.go:1735] "Failed creating a mirror pod for" err="pods \"keepalived-hub-master-0.workload.bos2.lab\" already exists" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod38eebeadc7ddc4d42d1de9a5e4ac69f1.slice.
-- Subject: Unit kubepods-burstable-pod38eebeadc7ddc4d42d1de9a5e4ac69f1.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-pod38eebeadc7ddc4d42d1de9a5e4ac69f1.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:28.140939 8631 kubelet.go:1735] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-hub-master-0.workload.bos2.lab\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:28.146217 8631 kubelet.go:1735] "Failed creating a mirror pod for" err="pods \"haproxy-hub-master-0.workload.bos2.lab\" already exists" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:28.151996 8631 kubelet.go:1735] "Failed creating a mirror pod for" err="pods \"etcd-hub-master-0.workload.bos2.lab\" already exists" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154699 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/04f654eda4f14a4bee64377a5c765343-run-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154722 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-script-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154738 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8e918bfaafef0fc7d13026942c43171-resource-dir\") pod \"kube-controller-manager-hub-master-0.workload.bos2.lab\" (UID: \"b8e918bfaafef0fc7d13026942c43171\") " pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154778 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nm-resolv\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-nm-resolv\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154806 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-data-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154836 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-log-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154857 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-cert-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154875 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-kubeconfig\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154910 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-conf-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154935 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-resource-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154951 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-resource-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154966 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/77321459d336b7d15305c9b9a83e4081-cert-dir\") pod \"openshift-kube-scheduler-hub-master-0.workload.bos2.lab\" (UID: \"77321459d336b7d15305c9b9a83e4081\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.154982 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/841c556dbc6afe45e33a42a9dd8b5492-run-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155002 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/77321459d336b7d15305c9b9a83e4081-resource-dir\") pod \"openshift-kube-scheduler-hub-master-0.workload.bos2.lab\" (UID: \"77321459d336b7d15305c9b9a83e4081\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155021 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-resource-dir\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155037 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-usr-local-bin\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155054 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-cert-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155073 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfigvarlib\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-kubeconfigvarlib\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155087 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-conf-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155102 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"chroot-host\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-chroot-host\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155124 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8e918bfaafef0fc7d13026942c43171-cert-dir\") pod \"kube-controller-manager-hub-master-0.workload.bos2.lab\" (UID: \"b8e918bfaafef0fc7d13026942c43171\") " pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155145 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-resource-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155161 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"chroot-host\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-chroot-host\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155175 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-resource-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155196 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-kubeconfig\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155231 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfigvarlib\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-kubeconfigvarlib\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155257 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-audit-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155272 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-conf-dir\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.155288 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-static-pod-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.255928 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-kubeconfig\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.255957 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubeconfigvarlib\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-kubeconfigvarlib\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.255977 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-audit-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.255994 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-conf-dir\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256012 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-static-pod-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256028 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubeconfigvarlib\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-kubeconfigvarlib\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256031 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-resource-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256064 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-resource-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256081 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"script-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-script-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256101 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8e918bfaafef0fc7d13026942c43171-resource-dir\") pod \"kube-controller-manager-hub-master-0.workload.bos2.lab\" (UID: \"b8e918bfaafef0fc7d13026942c43171\") " pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256105 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-audit-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256116 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"nm-resolv\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-nm-resolv\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256128 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-conf-dir\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256133 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-data-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256140 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8e918bfaafef0fc7d13026942c43171-resource-dir\") pod \"kube-controller-manager-hub-master-0.workload.bos2.lab\" (UID: \"b8e918bfaafef0fc7d13026942c43171\") " pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256151 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-log-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256152 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-static-pod-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256114 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-kubeconfig\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256177 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/04f654eda4f14a4bee64377a5c765343-run-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256177 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"script-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-script-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256169 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-log-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256186 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-data-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256215 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-cert-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256228 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"nm-resolv\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-nm-resolv\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256235 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-cert-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256236 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-kubeconfig\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256253 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-kubeconfig\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256292 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-conf-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256322 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-conf-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256331 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/04f654eda4f14a4bee64377a5c765343-run-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256350 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-resource-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256365 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-resource-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256375 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-resource-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256380 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/77321459d336b7d15305c9b9a83e4081-cert-dir\") pod \"openshift-kube-scheduler-hub-master-0.workload.bos2.lab\" (UID: \"77321459d336b7d15305c9b9a83e4081\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256396 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9552ff413d8390655360ce968177c622-resource-dir\") pod \"kube-apiserver-hub-master-0.workload.bos2.lab\" (UID: \"9552ff413d8390655360ce968177c622\") " pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256396 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/841c556dbc6afe45e33a42a9dd8b5492-run-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256419 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/77321459d336b7d15305c9b9a83e4081-resource-dir\") pod \"openshift-kube-scheduler-hub-master-0.workload.bos2.lab\" (UID: \"77321459d336b7d15305c9b9a83e4081\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256433 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-resource-dir\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256447 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-usr-local-bin\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256462 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-cert-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256461 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/77321459d336b7d15305c9b9a83e4081-cert-dir\") pod \"openshift-kube-scheduler-hub-master-0.workload.bos2.lab\" (UID: \"77321459d336b7d15305c9b9a83e4081\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256477 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubeconfigvarlib\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-kubeconfigvarlib\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256492 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-conf-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256500 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5eb8d73fcd73cda1a9e34d91bb51e339-resource-dir\") pod \"coredns-hub-master-0.workload.bos2.lab\" (UID: \"5eb8d73fcd73cda1a9e34d91bb51e339\") " pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256507 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"chroot-host\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-chroot-host\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256507 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-dir\" (UniqueName: \"kubernetes.io/empty-dir/841c556dbc6afe45e33a42a9dd8b5492-run-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256523 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/77321459d336b7d15305c9b9a83e4081-resource-dir\") pod \"openshift-kube-scheduler-hub-master-0.workload.bos2.lab\" (UID: \"77321459d336b7d15305c9b9a83e4081\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256522 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8e918bfaafef0fc7d13026942c43171-cert-dir\") pod \"kube-controller-manager-hub-master-0.workload.bos2.lab\" (UID: \"b8e918bfaafef0fc7d13026942c43171\") " pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256535 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-usr-local-bin\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256542 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-resource-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256546 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"conf-dir\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-conf-dir\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256549 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubeconfigvarlib\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-kubeconfigvarlib\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256556 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/38eebeadc7ddc4d42d1de9a5e4ac69f1-cert-dir\") pod \"etcd-hub-master-0.workload.bos2.lab\" (UID: \"38eebeadc7ddc4d42d1de9a5e4ac69f1\") " pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256559 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"chroot-host\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-chroot-host\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256569 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8e918bfaafef0fc7d13026942c43171-cert-dir\") pod \"kube-controller-manager-hub-master-0.workload.bos2.lab\" (UID: \"b8e918bfaafef0fc7d13026942c43171\") " pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256577 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"chroot-host\" (UniqueName: \"kubernetes.io/host-path/841c556dbc6afe45e33a42a9dd8b5492-chroot-host\") pod \"keepalived-hub-master-0.workload.bos2.lab\" (UID: \"841c556dbc6afe45e33a42a9dd8b5492\") " pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256582 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-resource-dir\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.256586 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"chroot-host\" (UniqueName: \"kubernetes.io/host-path/04f654eda4f14a4bee64377a5c765343-chroot-host\") pod \"haproxy-hub-master-0.workload.bos2.lab\" (UID: \"04f654eda4f14a4bee64377a5c765343\") " pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.416657 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.417218602Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/POD" id=eb48a803-0b1e-43d5-b355-326719756841 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.417432085Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.430567 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.430896328Z" level=info msg="Running pod sandbox: openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/POD" id=64d54f4a-8bb6-435d-ab59-a7b2a2bf8935 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.430930693Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.433239 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.433579557Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/POD" id=222a03b8-ebe1-4b1a-bf7a-60f0b878acfd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.433603864Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.436760 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.436938448Z" level=info msg="Running pod sandbox: openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/POD" id=6927d72b-cbd9-477b-97bd-863c72a3206f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.436960621Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.441210 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.441361516Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/POD" id=11932b3d-3ea5-4b52-a518-d08d4a359e17 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.441384621Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.446713 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.446874153Z" level=info msg="Running pod sandbox: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/POD" id=e0069ae5-1ae2-4e10-8292-0e250641233b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.446901579Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.452245 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.460302284Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/POD" id=71979e85-2849-46ab-a4fc-50a9cd83da6b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:28.460338552Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.851528 8631 apiserver.go:52] "Watching apiserver"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.859948 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab openshift-route-controller-manager/route-controller-manager-5fdd49db4f-ftmvb openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab openshift-etcd/revision-pruner-9-hub-master-0.workload.bos2.lab openshift-kube-controller-manager/revision-pruner-8-hub-master-0.workload.bos2.lab openshift-multus/multus-additional-cni-plugins-7ks6h openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab openshift-kube-controller-manager/installer-6-hub-master-0.workload.bos2.lab openshift-kube-controller-manager/installer-5-hub-master-0.workload.bos2.lab openshift-etcd/revision-pruner-10-hub-master-0.workload.bos2.lab openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab openshift-dns/node-resolver-9bshd openshift-network-diagnostics/network-check-target-qs9w4 openshift-kube-apiserver/revision-pruner-6-hub-master-0.workload.bos2.lab openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab openshift-etcd/installer-10-hub-master-0.workload.bos2.lab openshift-machine-api/ironic-proxy-nhh2z openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab openshift-kube-scheduler/revision-pruner-6-hub-master-0.workload.bos2.lab openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab openshift-multus/network-metrics-daemon-dzwx9 openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab openshift-ingress-canary/ingress-canary-7v8f9 openshift-machine-config-operator/machine-config-daemon-jkffc openshift-kube-controller-manager/revision-pruner-6-hub-master-0.workload.bos2.lab openshift-monitoring/node-exporter-pbh26 openshift-authentication/oauth-openshift-868d5f6bf8-ttp4c openshift-etcd/installer-9-hub-master-0.workload.bos2.lab openshift-image-registry/node-ca-2j9w6 openshift-kube-apiserver/installer-6-hub-master-0.workload.bos2.lab openshift-machine-config-operator/machine-config-server-vpsv9 openshift-kube-controller-manager/revision-pruner-7-hub-master-0.workload.bos2.lab openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab openshift-kube-scheduler/installer-6-hub-master-0.workload.bos2.lab openshift-kube-scheduler/installer-7-hub-master-0.workload.bos2.lab openshift-ovn-kubernetes/ovnkube-node-897lw openshift-ovn-kubernetes/ovnkube-master-fld8m openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab openshift-kube-controller-manager/installer-8-hub-master-0.workload.bos2.lab openshift-kube-scheduler/revision-pruner-7-hub-master-0.workload.bos2.lab openshift-multus/multus-cdt6c openshift-cluster-node-tuning-operator/tuned-4pckj openshift-etcd/etcd-hub-master-0.workload.bos2.lab openshift-controller-manager/controller-manager-876b6ffdf-hrzw7 openshift-kube-apiserver/installer-4-hub-master-0.workload.bos2.lab openshift-oauth-apiserver/apiserver-86c7cf6467-v5ckj openshift-apiserver/apiserver-746c4bf98c-r7nkz openshift-dns/dns-default-srzv5 openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab openshift-kube-apiserver/revision-pruner-7-hub-master-0.workload.bos2.lab]
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.859978 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.860041 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.860088 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.860122 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.860331 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.860515 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.860582 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.861271 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.864097 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.865542 8631
topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.865621 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.865661 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.865698 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.865744 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.865787 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.865822 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866027 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866111 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866419 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866492 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866568 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866668 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866721 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866781 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866829 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866868 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866918 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.866963 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867001 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867043 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867085 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867134 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867172 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867214 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867255 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867322 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867366 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867417 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867511 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867774 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.867950 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.868024 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.868119 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.868229 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.868403 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.868535 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-podb6c2cdc5_967e_4062_b6e6_f6cf372cc21c.slice. -- Subject: Unit kubepods-burstable-podb6c2cdc5_967e_4062_b6e6_f6cf372cc21c.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-podb6c2cdc5_967e_4062_b6e6_f6cf372cc21c.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod0fdadbfc_e471_4e10_97e8_80b8e881aec6.slice. -- Subject: Unit kubepods-burstable-pod0fdadbfc_e471_4e10_97e8_80b8e881aec6.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod0fdadbfc_e471_4e10_97e8_80b8e881aec6.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod94cb9be9_32f4_413c_9fdf_a6e9307ff410.slice. -- Subject: Unit kubepods-burstable-pod94cb9be9_32f4_413c_9fdf_a6e9307ff410.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod94cb9be9_32f4_413c_9fdf_a6e9307ff410.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-podfc516524_2ee1_45e5_8b33_0266acf098d1.slice. 
-- Subject: Unit kubepods-burstable-podfc516524_2ee1_45e5_8b33_0266acf098d1.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-podfc516524_2ee1_45e5_8b33_0266acf098d1.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod612bc2d6_261c_4dc3_9902_489a4589ec9b.slice. -- Subject: Unit kubepods-burstable-pod612bc2d6_261c_4dc3_9902_489a4589ec9b.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod612bc2d6_261c_4dc3_9902_489a4589ec9b.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod839425af_4ad1_4627_b58f_20197745cb4a.slice. -- Subject: Unit kubepods-burstable-pod839425af_4ad1_4627_b58f_20197745cb4a.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod839425af_4ad1_4627_b58f_20197745cb4a.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod3a8bb7cc_95f9_45d8_bb73_c7ddcdcbc28e.slice. -- Subject: Unit kubepods-burstable-pod3a8bb7cc_95f9_45d8_bb73_c7ddcdcbc28e.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod3a8bb7cc_95f9_45d8_bb73_c7ddcdcbc28e.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-podd7b22547_215c_4758_8154_a3bfc577ec12.slice. -- Subject: Unit kubepods-burstable-podd7b22547_215c_4758_8154_a3bfc577ec12.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-podd7b22547_215c_4758_8154_a3bfc577ec12.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod16a4fd86_c6fa_40ea_aa9b_a2f91d9c275b.slice. -- Subject: Unit kubepods-burstable-pod16a4fd86_c6fa_40ea_aa9b_a2f91d9c275b.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod16a4fd86_c6fa_40ea_aa9b_a2f91d9c275b.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod16d2550a_6aa8_453b_9d72_f50466ef11b2.slice. -- Subject: Unit kubepods-burstable-pod16d2550a_6aa8_453b_9d72_f50466ef11b2.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod16d2550a_6aa8_453b_9d72_f50466ef11b2.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-podff6a907c_8dc5_4524_b928_d97ba7b430c3.slice. 
-- Subject: Unit kubepods-burstable-podff6a907c_8dc5_4524_b928_d97ba7b430c3.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-podff6a907c_8dc5_4524_b928_d97ba7b430c3.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod2284ac10_60cf_4768_bd24_3ea63b730ce6.slice. -- Subject: Unit kubepods-burstable-pod2284ac10_60cf_4768_bd24_3ea63b730ce6.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod2284ac10_60cf_4768_bd24_3ea63b730ce6.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:28.953631 8631 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2284ac10_60cf_4768_bd24_3ea63b730ce6.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2284ac10_60cf_4768_bd24_3ea63b730ce6.slice: no such file or directory Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod16c1efa7_495c_45d5_b9c1_00d078cb4114.slice. -- Subject: Unit kubepods-burstable-pod16c1efa7_495c_45d5_b9c1_00d078cb4114.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod16c1efa7_495c_45d5_b9c1_00d078cb4114.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960187 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7b22547-215c-4758-8154-a3bfc577ec12-certs\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960294 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s65s6\" (UniqueName: \"kubernetes.io/projected/ff6a907c-8dc5-4524-b928-d97ba7b430c3-kube-api-access-s65s6\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960325 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-etc-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960345 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa-kubelet-dir\") pod \"revision-pruner-10-hub-master-0.workload.bos2.lab\" (UID: \"6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa\") " pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:15:28.960405 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-system-cni-dir\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960434 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960463 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-os-release\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960484 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ssqt\" (UniqueName: \"kubernetes.io/projected/dd7e23a1-2620-491c-a453-b41708d2e0d7-kube-api-access-2ssqt\") pod \"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960501 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-systemd-units\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960519 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa-kube-api-access\") pod \"revision-pruner-10-hub-master-0.workload.bos2.lab\" (UID: \"6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa\") " pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960557 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kubelet-dir\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960591 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/612bc2d6-261c-4dc3-9902-489a4589ec9b-rootfs\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:28 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:15:28.960608 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-metrics-tls\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960624 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50-kubelet-dir\") pod \"revision-pruner-9-hub-master-0.workload.bos2.lab\" (UID: \"2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50\") " pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960641 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-slash\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960663 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htrb\" (UniqueName: \"kubernetes.io/projected/5ced4aec-1711-4abf-825a-c546047148b7-kube-api-access-5htrb\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960684 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/612bc2d6-261c-4dc3-9902-489a4589ec9b-cookie-secret\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960699 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-ca\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960715 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ced4aec-1711-4abf-825a-c546047148b7-serviceca\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960734 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-system-cni-dir\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960756 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960772 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tsmq\" (UniqueName: \"kubernetes.io/projected/0dd28320-8b9c-4b86-baca-8c1d561a962c-kube-api-access-8tsmq\") pod \"ingress-canary-7v8f9\" (UID: \"0dd28320-8b9c-4b86-baca-8c1d561a962c\") " pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960789 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovnkube-config\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960814 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnhw\" (UniqueName: \"kubernetes.io/projected/0fdadbfc-e471-4e10-97e8-80b8e881aec6-kube-api-access-9gnhw\") pod \"network-check-target-qs9w4\" (UID: \"0fdadbfc-e471-4e10-97e8-80b8e881aec6\") " pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960834 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-env-overrides\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960869 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-run-systemd-system\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960894 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-config-volume\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960915 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwhz\" (UniqueName: \"kubernetes.io/projected/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14-kube-api-access-mbwhz\") pod \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab\" (UID: \"7cca1a4c-e8cc-4938-9e14-a4d8d979ad14\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960938 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovnkube-config\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960960 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-os-release\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.960981 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-multus-cni-dir\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961000 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-sys\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961022 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-systemd-units\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961048 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ced4aec-1711-4abf-825a-c546047148b7-host\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961067 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-run-netns\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961086 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/839425af-4ad1-4627-b58f-20197745cb4a-hosts-file\") pod \"node-resolver-9bshd\" (UID: \"839425af-4ad1-4627-b58f-20197745cb4a\") " pod="openshift-dns/node-resolver-9bshd" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961101 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-log-socket\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961125 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-node-metrics-cert\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961144 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961171 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5lk\" (UniqueName: \"kubernetes.io/projected/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b-kube-api-access-fg5lk\") pod \"etcd-guard-hub-master-0.workload.bos2.lab\" (UID: \"16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b\") " pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961187 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/dd7e23a1-2620-491c-a453-b41708d2e0d7-metal3-ironic-tls\") pod \"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961202 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-run-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961234 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961268 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf9abfd8-f6ab-41d0-9984-1c374f00d734-kubelet-dir\") pod \"revision-pruner-8-hub-master-0.workload.bos2.lab\" (UID: \"bf9abfd8-f6ab-41d0-9984-1c374f00d734\") " pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961291 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-var-lib-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961310 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-run-ovn\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961326 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-host\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961344 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-var-lock\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961370 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kube-api-access\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961393 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-cnibin\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961408 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-cert\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961447 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cni-binary-copy\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961470 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-cni-binary-copy\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961496 8631 
reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d7b22547-215c-4758-8154-a3bfc577ec12-node-bootstrap-token\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961522 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-env-overrides\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961541 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-cert\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961564 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhsm\" (UniqueName: \"kubernetes.io/projected/94cb9be9-32f4-413c-9fdf-a6e9307ff410-kube-api-access-lmhsm\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961586 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-node-log\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961601 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-ca\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961620 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf9abfd8-f6ab-41d0-9984-1c374f00d734-kube-api-access\") pod \"revision-pruner-8-hub-master-0.workload.bos2.lab\" (UID: \"bf9abfd8-f6ab-41d0-9984-1c374f00d734\") " pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961636 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-root\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961659 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-wtmp\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961678 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-cni-bin\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961694 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-sys\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961710 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jktgq\" (UniqueName: \"kubernetes.io/projected/16d2550a-6aa8-453b-9d72-f50466ef11b2-kube-api-access-jktgq\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961728 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50-kube-api-access\") pod \"revision-pruner-9-hub-master-0.workload.bos2.lab\" (UID: \"2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50\") " pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961751 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-textfile\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961773 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-var-lib-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961795 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpcqb\" (UniqueName: \"kubernetes.io/projected/839425af-4ad1-4627-b58f-20197745cb4a-kube-api-access-xpcqb\") pod \"node-resolver-9bshd\" (UID: \"839425af-4ad1-4627-b58f-20197745cb4a\") " pod="openshift-dns/node-resolver-9bshd" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961811 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-cni-netd\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961826 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-run-ovn\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961849 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4ctm\" (UniqueName: \"kubernetes.io/projected/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-kube-api-access-s4ctm\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961865 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-var-run-dbus\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961896 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6bg\" (UniqueName: \"kubernetes.io/projected/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-kube-api-access-5d6bg\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961919 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff6a907c-8dc5-4524-b928-d97ba7b430c3-metrics-client-ca\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961940 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961958 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-run-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.961975 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc516524-2ee1-45e5-8b33-0266acf098d1-metrics-certs\") pod \"network-metrics-daemon-dzwx9\" 
(UID: \"fc516524-2ee1-45e5-8b33-0266acf098d1\") " pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962042 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7nz\" (UniqueName: \"kubernetes.io/projected/2284ac10-60cf-4768-bd24-3ea63b730ce6-kube-api-access-zk7nz\") pod \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab\" (UID: \"2284ac10-60cf-4768-bd24-3ea63b730ce6\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962071 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svqk6\" (UniqueName: \"kubernetes.io/projected/409cdcf0-1eab-47ad-9389-ad5809e748ff-kube-api-access-svqk6\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962099 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-etc\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962124 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-lib-modules\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962148 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cnibin\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962171 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fggn\" (UniqueName: \"kubernetes.io/projected/612bc2d6-261c-4dc3-9902-489a4589ec9b-kube-api-access-2fggn\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962211 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-master-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-master-metrics-cert\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962254 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-967j5\" (UniqueName: \"kubernetes.io/projected/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-kube-api-access-967j5\") pod 
\"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962274 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vk9j\" (UniqueName: \"kubernetes.io/projected/d7b22547-215c-4758-8154-a3bfc577ec12-kube-api-access-6vk9j\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962299 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/16d2550a-6aa8-453b-9d72-f50466ef11b2-var-lib-tuned-profiles-data\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962318 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4kg\" (UniqueName: \"kubernetes.io/projected/fc516524-2ee1-45e5-8b33-0266acf098d1-kube-api-access-db4kg\") pod \"network-metrics-daemon-dzwx9\" (UID: \"fc516524-2ee1-45e5-8b33-0266acf098d1\") " pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962339 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/612bc2d6-261c-4dc3-9902-489a4589ec9b-proxy-tls\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962356 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-etc-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962380 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-tls\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962405 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldz65\" (UniqueName: \"kubernetes.io/projected/16c1efa7-495c-45d5-b9c1-00d078cb4114-kube-api-access-ldz65\") pod \"kube-apiserver-guard-hub-master-0.workload.bos2.lab\" (UID: \"16c1efa7-495c-45d5-b9c1-00d078cb4114\") " pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962427 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd7e23a1-2620-491c-a453-b41708d2e0d7-trusted-ca\") pod 
\"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z" Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.962461 8631 reconciler.go:169] "Reconciler: start to sync state" Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod7cca1a4c_e8cc_4938_9e14_a4d8d979ad14.slice. -- Subject: Unit kubepods-burstable-pod7cca1a4c_e8cc_4938_9e14_a4d8d979ad14.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod7cca1a4c_e8cc_4938_9e14_a4d8d979ad14.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965045 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-trusted-ca-bundle podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d170 outerVolumeSpecName:v4-0-config-system-trusted-ca-bundle pod:0xc0015ddc00 volumeGidValue: devicePath: mounter:0xc001ae8900 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965076 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-service-ca podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d188 outerVolumeSpecName:v4-0-config-system-service-ca pod:0xc001534800 volumeGidValue: devicePath: mounter:0xc001ae8d80 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965099 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-cliconfig podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d1e8 outerVolumeSpecName:v4-0-config-system-cliconfig pod:0xc001535c00 volumeGidValue: devicePath: mounter:0xc001ae9200 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965122 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-audit-policies podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d200 outerVolumeSpecName:audit-policies pod:0xc001620400 volumeGidValue: devicePath: mounter:0xc001ae9680 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965153 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/projected/02502f0c-09a2-4a94-b4f4-92a060050951-kube-api-access-sspj2 podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d218 outerVolumeSpecName:kube-api-access-sspj2 pod:0xc001620800 volumeGidValue: devicePath: mounter:0xc000c5dc40 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:15:28.965188 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-ocp-branding-template podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d230 outerVolumeSpecName:v4-0-config-system-ocp-branding-template pod:0xc001621000 volumeGidValue: devicePath: mounter:0xc001ae9b00 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965220 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-login podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d290 outerVolumeSpecName:v4-0-config-user-template-login pod:0xc001621400 volumeGidValue: devicePath: mounter:0xc001e98000 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965244 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-provider-selection podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d2a8 outerVolumeSpecName:v4-0-config-user-template-provider-selection pod:0xc001621800 volumeGidValue: devicePath: mounter:0xc001e98480 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965267 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-serving-cert podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d2c0 outerVolumeSpecName:v4-0-config-system-serving-cert pod:0xc001e9a000 volumeGidValue: devicePath: mounter:0xc001e98900 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965294 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-router-certs podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d2d8 outerVolumeSpecName:v4-0-config-system-router-certs pod:0xc001e9a400 volumeGidValue: devicePath: mounter:0xc001e98d80 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965322 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-error podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d380 outerVolumeSpecName:v4-0-config-user-template-error pod:0xc001e9a800 volumeGidValue: devicePath: mounter:0xc001e99200 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965347 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it 
in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-session podName:02502f0c-09a2-4a94-b4f4-92a060050951 volumeSpec:0xc000d8d3b0 outerVolumeSpecName:v4-0-config-system-session pod:0xc001e9ac00 volumeGidValue: devicePath: mounter:0xc001e99680 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965406 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-image-import-ca podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d3f8 outerVolumeSpecName:image-import-ca pod:0xc001e9b800 volumeGidValue: devicePath: mounter:0xc001e99b00 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965431 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-audit podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d410 outerVolumeSpecName:audit pod:0xc001e9bc00 volumeGidValue: devicePath: mounter:0xc001e9e000 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965456 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-trusted-ca-bundle podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d428 outerVolumeSpecName:trusted-ca-bundle pod:0xc001ea8000 volumeGidValue: devicePath: mounter:0xc001e9e480 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965481 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-config podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d458 outerVolumeSpecName:config pod:0xc001ea8400 volumeGidValue: devicePath: mounter:0xc001e9e900 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965500 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-etcd-serving-ca podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d470 outerVolumeSpecName:etcd-serving-ca pod:0xc001ea8800 volumeGidValue: devicePath: mounter:0xc001e9ed80 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965521 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/projected/14819588-d3b0-492e-8c78-4bbee02f2eca-kube-api-access-tcctn podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d488 outerVolumeSpecName:kube-api-access-tcctn pod:0xc001ea8c00 volumeGidValue: devicePath: mounter:0xc0007533c0 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965542 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-serving-cert podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d4a0 outerVolumeSpecName:serving-cert pod:0xc001ea9000 volumeGidValue: devicePath: mounter:0xc001e9f200 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965562 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-encryption-config podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d4b8 outerVolumeSpecName:encryption-config pod:0xc001ea9400 volumeGidValue: devicePath: mounter:0xc001e9f680 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965582 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-etcd-client podName:14819588-d3b0-492e-8c78-4bbee02f2eca volumeSpec:0xc000d8d4d0 outerVolumeSpecName:etcd-client pod:0xc001ea9800 volumeGidValue: devicePath: mounter:0xc001e9fb00 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965763 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-proxy-ca-bundles podName:4b289996-b213-413c-a468-f51e7e3eb0e4 volumeSpec:0xc000d8d698 outerVolumeSpecName:proxy-ca-bundles pod:0xc001eb7800 volumeGidValue: devicePath: mounter:0xc001ebc480 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965789 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-client-ca podName:4b289996-b213-413c-a468-f51e7e3eb0e4 volumeSpec:0xc000d8d6c8 outerVolumeSpecName:client-ca pod:0xc001eb7c00 volumeGidValue: devicePath: mounter:0xc001ebc900 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965810 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-config podName:4b289996-b213-413c-a468-f51e7e3eb0e4 volumeSpec:0xc000d8d6e0 outerVolumeSpecName:config pod:0xc001ebe000 volumeGidValue: devicePath: mounter:0xc001ebcd80 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965832 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/projected/4b289996-b213-413c-a468-f51e7e3eb0e4-kube-api-access-jhngf podName:4b289996-b213-413c-a468-f51e7e3eb0e4 volumeSpec:0xc000d8d6f8 
outerVolumeSpecName:kube-api-access-jhngf pod:0xc001ebe400 volumeGidValue: devicePath: mounter:0xc001eba340 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965853 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/4b289996-b213-413c-a468-f51e7e3eb0e4-serving-cert podName:4b289996-b213-413c-a468-f51e7e3eb0e4 volumeSpec:0xc000d8d710 outerVolumeSpecName:serving-cert pod:0xc001ebe800 volumeGidValue: devicePath: mounter:0xc001ebd200 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965926 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/673a603f-a83d-437b-bf5e-7a95a63a17fa-config podName:673a603f-a83d-437b-bf5e-7a95a63a17fa volumeSpec:0xc000d8d7a0 outerVolumeSpecName:config pod:0xc001ec4000 volumeGidValue: devicePath: mounter:0xc001ec2480 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965950 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/673a603f-a83d-437b-bf5e-7a95a63a17fa-client-ca podName:673a603f-a83d-437b-bf5e-7a95a63a17fa volumeSpec:0xc000d8d7b8 outerVolumeSpecName:client-ca pod:0xc001ec4400 volumeGidValue: devicePath: mounter:0xc001ec2900 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965971 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/projected/673a603f-a83d-437b-bf5e-7a95a63a17fa-kube-api-access-nxzfv podName:673a603f-a83d-437b-bf5e-7a95a63a17fa volumeSpec:0xc000d8d7d0 outerVolumeSpecName:kube-api-access-nxzfv pod:0xc001ec4800 volumeGidValue: devicePath: mounter:0xc001eba900 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.965995 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/673a603f-a83d-437b-bf5e-7a95a63a17fa-serving-cert podName:673a603f-a83d-437b-bf5e-7a95a63a17fa volumeSpec:0xc000d8d7e8 outerVolumeSpecName:serving-cert pod:0xc001ec4c00 volumeGidValue: devicePath: mounter:0xc001ec2d80 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.966198 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-trusted-ca-bundle podName:c8f5ce0b-5be2-49aa-ae7a-ddd7de103471 volumeSpec:0xc000d8d980 outerVolumeSpecName:trusted-ca-bundle pod:0xc001ed3400 volumeGidValue: devicePath: mounter:0xc001ed1680 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.966223 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" 
reconstructedVolume=&{volumeName:kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-etcd-serving-ca podName:c8f5ce0b-5be2-49aa-ae7a-ddd7de103471 volumeSpec:0xc000d8d998 outerVolumeSpecName:etcd-serving-ca pod:0xc001ed3800 volumeGidValue: devicePath: mounter:0xc001ed1b00 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.966246 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-audit-policies podName:c8f5ce0b-5be2-49aa-ae7a-ddd7de103471 volumeSpec:0xc000d8d9b0 outerVolumeSpecName:audit-policies pod:0xc001ed3c00 volumeGidValue: devicePath: mounter:0xc001ed8000 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.966269 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/projected/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-kube-api-access-svp2s podName:c8f5ce0b-5be2-49aa-ae7a-ddd7de103471 volumeSpec:0xc000d8d9c8 outerVolumeSpecName:kube-api-access-svp2s pod:0xc001edc000 volumeGidValue: devicePath: mounter:0xc001ebb7c0 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.966291 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-etcd-client podName:c8f5ce0b-5be2-49aa-ae7a-ddd7de103471 volumeSpec:0xc000d8d9e0 outerVolumeSpecName:etcd-client pod:0xc001edc400 volumeGidValue: devicePath: mounter:0xc001ed8480 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.966317 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-serving-cert podName:c8f5ce0b-5be2-49aa-ae7a-ddd7de103471 volumeSpec:0xc000d8d9f8 outerVolumeSpecName:serving-cert pod:0xc001edc800 volumeGidValue: devicePath: mounter:0xc001ed8900 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:28.966341 8631 reconciler.go:537] "Reconciler sync states: could not find pod information in desired state, update it in actual state" reconstructedVolume=&{volumeName:kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-encryption-config podName:c8f5ce0b-5be2-49aa-ae7a-ddd7de103471 volumeSpec:0xc000d8da10 outerVolumeSpecName:encryption-config pod:0xc001edcc00 volumeGidValue: devicePath: mounter:0xc001ed8d80 deviceMounter: blockVolumeMapper:} Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod0dd28320_8b9c_4b86_baca_8c1d561a962c.slice. -- Subject: Unit kubepods-burstable-pod0dd28320_8b9c_4b86_baca_8c1d561a962c.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod0dd28320_8b9c_4b86_baca_8c1d561a962c.slice has finished starting up. -- -- The start-up result is done. 
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod5ced4aec_1711_4abf_825a_c546047148b7.slice.
-- Subject: Unit kubepods-burstable-pod5ced4aec_1711_4abf_825a_c546047148b7.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-pod5ced4aec_1711_4abf_825a_c546047148b7.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-poda88a1018_cc7c_4bd1_b3d2_0d960b53459c.slice.
-- Subject: Unit kubepods-burstable-poda88a1018_cc7c_4bd1_b3d2_0d960b53459c.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-poda88a1018_cc7c_4bd1_b3d2_0d960b53459c.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:28 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-poddd7e23a1_2620_491c_a453_b41708d2e0d7.slice.
-- Subject: Unit kubepods-burstable-poddd7e23a1_2620_491c_a453_b41708d2e0d7.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-poddd7e23a1_2620_491c_a453_b41708d2e0d7.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-podbf9abfd8_f6ab_41d0_9984_1c374f00d734.slice.
-- Subject: Unit kubepods-podbf9abfd8_f6ab_41d0_9984_1c374f00d734.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-podbf9abfd8_f6ab_41d0_9984_1c374f00d734.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod409cdcf0_1eab_47ad_9389_ad5809e748ff.slice.
-- Subject: Unit kubepods-burstable-pod409cdcf0_1eab_47ad_9389_ad5809e748ff.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-burstable-pod409cdcf0_1eab_47ad_9389_ad5809e748ff.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.011496961Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=6927d72b-cbd9-477b-97bd-863c72a3206f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.011877811Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=e0069ae5-1ae2-4e10-8292-0e250641233b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.012175690Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=11932b3d-3ea5-4b52-a518-d08d4a359e17 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.012498553Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=222a03b8-ebe1-4b1a-bf7a-60f0b878acfd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.017171 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod841c556dbc6afe45e33a42a9dd8b5492.slice/crio-46776229e966aaf0cd0c958b2e048b32ae5c8adb2af3d0d1833ad7bc56fef6c5.scope WatchSource:0}: Error finding container 46776229e966aaf0cd0c958b2e048b32ae5c8adb2af3d0d1833ad7bc56fef6c5: Status 404 returned error can't find the container with id 46776229e966aaf0cd0c958b2e048b32ae5c8adb2af3d0d1833ad7bc56fef6c5
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.017298 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e918bfaafef0fc7d13026942c43171.slice/crio-f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462.scope WatchSource:0}: Error finding container f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462: Status 404 returned error can't find the container with id f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.018288 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04f654eda4f14a4bee64377a5c765343.slice/crio-1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7.scope WatchSource:0}: Error finding container 1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7: Status 404 returned error can't find the container with id 1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.018710 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9552ff413d8390655360ce968177c622.slice/crio-1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3.scope WatchSource:0}: Error finding container 1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3: Status 404 returned error can't find the container with id 1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.021309365Z" level=info msg="Ran pod sandbox 1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7 with infra container: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/POD" id=e0069ae5-1ae2-4e10-8292-0e250641233b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.021318603Z" level=info msg="Ran pod sandbox f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462 with infra container: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/POD" id=11932b3d-3ea5-4b52-a518-d08d4a359e17 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.021407413Z" level=info msg="Ran pod sandbox 1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3 with infra container: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/POD" id=222a03b8-ebe1-4b1a-bf7a-60f0b878acfd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.021461525Z" level=info msg="Ran pod sandbox 46776229e966aaf0cd0c958b2e048b32ae5c8adb2af3d0d1833ad7bc56fef6c5 with infra container: openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/POD" id=6927d72b-cbd9-477b-97bd-863c72a3206f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022074900Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=61f2e917-edbe-426e-a641-d719748081b0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022116766Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=343aa15b-8c65-4b96-af32-2f59c3140aff name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022084923Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=dcd3cd81-2742-4014-b095-03db4228d291 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022224260Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=dc1b64f3-55a1-4e2e-9b5d-be63f7776753 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022236972Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=343aa15b-8c65-4b96-af32-2f59c3140aff name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022317360Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dc1b64f3-55a1-4e2e-9b5d-be63f7776753 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022332396Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1 not found" id=61f2e917-edbe-426e-a641-d719748081b0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022378665Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1 not found" id=dcd3cd81-2742-4014-b095-03db4228d291 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.022523 8631 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022630816Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=c1dd3c7f-fb1e-458e-a90e-89e09454ebe2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022719830Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c1dd3c7f-fb1e-458e-a90e-89e09454ebe2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022744711Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=f3167d62-7793-4427-8316-c9877328486b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022814266Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f3167d62-7793-4427-8316-c9877328486b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022902653Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=616829b4-85fb-4dbe-ba4c-6e316212f141 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.022960957Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=9787ec68-f3a4-4868-9273-d8d0562062a1 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.023138289Z" level=info msg="Creating container: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/verify-api-int-resolvable" id=a84546f0-a974-44e0-a69e-bb027eaa1bc2 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.023165675Z" level=info msg="Creating container: openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/render-config-keepalived" id=7dc36234-54f4-40a7-a57c-e1d2acfdc196 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.023197002Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.023252049Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.025096448Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1\""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.025104322Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1\""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-pod2dd7d41b_a444_4ab3_8a7b_b6aff6fb5d50.slice.
-- Subject: Unit kubepods-pod2dd7d41b_a444_4ab3_8a7b_b6aff6fb5d50.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-pod2dd7d41b_a444_4ab3_8a7b_b6aff6fb5d50.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-pod6e7703f8_c0f2_4b5d_bb68_b729d8aa90fa.slice.
-- Subject: Unit kubepods-pod6e7703f8_c0f2_4b5d_bb68_b729d8aa90fa.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-pod6e7703f8_c0f2_4b5d_bb68_b729d8aa90fa.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-podb3d56249_2e6a_43ad_a3c0_2fa37cef89b0.slice.
-- Subject: Unit kubepods-podb3d56249_2e6a_43ad_a3c0_2fa37cef89b0.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-podb3d56249_2e6a_43ad_a3c0_2fa37cef89b0.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.068754 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-encryption-config\") pod \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\" (UID: \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.068920 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-config\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.068937 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-encryption-config\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.068930 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471/volumes/kubernetes.io~secret/encryption-config: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.068974 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471" (UID: "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.068953 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-etcd-serving-ca\") pod \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\" (UID: \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069013 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-audit-policies\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069038 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-proxy-ca-bundles\") pod \"4b289996-b213-413c-a468-f51e7e3eb0e4\" (UID: \"4b289996-b213-413c-a468-f51e7e3eb0e4\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069055 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-login\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069070 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-serving-cert\") pod \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\" (UID: \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069059 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471/volumes/kubernetes.io~configmap/etcd-serving-ca: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069085 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-config\") pod \"4b289996-b213-413c-a468-f51e7e3eb0e4\" (UID: \"4b289996-b213-413c-a468-f51e7e3eb0e4\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069109 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-router-certs\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069125 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-client-ca\") pod \"4b289996-b213-413c-a468-f51e7e3eb0e4\" (UID: \"4b289996-b213-413c-a468-f51e7e3eb0e4\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069112 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~configmap/config: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069141 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxzfv\" (UniqueName: \"kubernetes.io/projected/673a603f-a83d-437b-bf5e-7a95a63a17fa-kube-api-access-nxzfv\") pod \"673a603f-a83d-437b-bf5e-7a95a63a17fa\" (UID: \"673a603f-a83d-437b-bf5e-7a95a63a17fa\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069178 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhngf\" (UniqueName: \"kubernetes.io/projected/4b289996-b213-413c-a468-f51e7e3eb0e4-kube-api-access-jhngf\") pod \"4b289996-b213-413c-a468-f51e7e3eb0e4\" (UID: \"4b289996-b213-413c-a468-f51e7e3eb0e4\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069196 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svp2s\" (UniqueName: \"kubernetes.io/projected/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-kube-api-access-svp2s\") pod \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\" (UID: \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069203 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471" (UID: "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069215 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-image-import-ca\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069209 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~secret/encryption-config: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069242 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-serving-cert\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069253 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069262 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-cliconfig\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069278 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-etcd-client\") pod \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\" (UID: \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069281 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-config" (OuterVolumeSpecName: "config") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069295 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-audit-policies\") pod \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\" (UID: \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069310 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-etcd-client\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069324 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-trusted-ca-bundle\") pod \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\" (UID: \"c8f5ce0b-5be2-49aa-ae7a-ddd7de103471\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069315 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471/volumes/kubernetes.io~secret/serving-cert: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069339 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4b289996-b213-413c-a468-f51e7e3eb0e4/volumes/kubernetes.io~configmap/config: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069350 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-trusted-ca-bundle\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069356 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471" (UID: "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069365 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-etcd-serving-ca\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069380 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4b289996-b213-413c-a468-f51e7e3eb0e4/volumes/kubernetes.io~configmap/client-ca: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069396 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4b289996-b213-413c-a468-f51e7e3eb0e4/volumes/kubernetes.io~configmap/proxy-ca-bundles: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069519 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4b289996-b213-413c-a468-f51e7e3eb0e4/volumes/kubernetes.io~secret/serving-cert: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069537 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~secret/v4-0-config-system-router-certs: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069559 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~secret/v4-0-config-user-template-login: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069576 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4b289996-b213-413c-a468-f51e7e3eb0e4" (UID: "4b289996-b213-413c-a468-f51e7e3eb0e4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069398 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b289996-b213-413c-a468-f51e7e3eb0e4-serving-cert\") pod \"4b289996-b213-413c-a468-f51e7e3eb0e4\" (UID: \"4b289996-b213-413c-a468-f51e7e3eb0e4\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069586 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-config" (OuterVolumeSpecName: "config") pod "4b289996-b213-413c-a468-f51e7e3eb0e4" (UID: "4b289996-b213-413c-a468-f51e7e3eb0e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069600 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-client-ca" (OuterVolumeSpecName: "client-ca") pod "4b289996-b213-413c-a468-f51e7e3eb0e4" (UID: "4b289996-b213-413c-a468-f51e7e3eb0e4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069603 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069546 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b289996-b213-413c-a468-f51e7e3eb0e4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4b289996-b213-413c-a468-f51e7e3eb0e4" (UID: "4b289996-b213-413c-a468-f51e7e3eb0e4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069613 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-audit\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069631 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069632 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471/volumes/kubernetes.io~secret/etcd-client: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069654 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-ocp-branding-template\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069676 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-serving-cert\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069676 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471" (UID: "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069683 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471/volumes/kubernetes.io~projected/kube-api-access-svp2s: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069697 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673a603f-a83d-437b-bf5e-7a95a63a17fa-config\") pod \"673a603f-a83d-437b-bf5e-7a95a63a17fa\" (UID: \"673a603f-a83d-437b-bf5e-7a95a63a17fa\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069724 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sspj2\" (UniqueName: \"kubernetes.io/projected/02502f0c-09a2-4a94-b4f4-92a060050951-kube-api-access-sspj2\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069725 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-kube-api-access-svp2s" (OuterVolumeSpecName: "kube-api-access-svp2s") pod "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471" (UID: "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471"). InnerVolumeSpecName "kube-api-access-svp2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069736 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~configmap/audit: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069748 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-session\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069746 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/673a603f-a83d-437b-bf5e-7a95a63a17fa/volumes/kubernetes.io~projected/kube-api-access-nxzfv: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069750 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~configmap/image-import-ca: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069749 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069770 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/673a603f-a83d-437b-bf5e-7a95a63a17fa-serving-cert\") pod \"673a603f-a83d-437b-bf5e-7a95a63a17fa\" (UID: \"673a603f-a83d-437b-bf5e-7a95a63a17fa\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069783 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/673a603f-a83d-437b-bf5e-7a95a63a17fa-kube-api-access-nxzfv" (OuterVolumeSpecName: "kube-api-access-nxzfv") pod "673a603f-a83d-437b-bf5e-7a95a63a17fa" (UID: "673a603f-a83d-437b-bf5e-7a95a63a17fa"). InnerVolumeSpecName "kube-api-access-nxzfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069797 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-service-ca\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069823 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcctn\" (UniqueName: \"kubernetes.io/projected/14819588-d3b0-492e-8c78-4bbee02f2eca-kube-api-access-tcctn\") pod \"14819588-d3b0-492e-8c78-4bbee02f2eca\" (UID: \"14819588-d3b0-492e-8c78-4bbee02f2eca\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069832 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~configmap/audit-policies: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069834 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4b289996-b213-413c-a468-f51e7e3eb0e4/volumes/kubernetes.io~projected/kube-api-access-jhngf: clearQuota called, but quotas disabled
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069846 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-error\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069858 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069866 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/673a603f-a83d-437b-bf5e-7a95a63a17fa-client-ca\") pod \"673a603f-a83d-437b-bf5e-7a95a63a17fa\" (UID: \"673a603f-a83d-437b-bf5e-7a95a63a17fa\") "
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069869 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b289996-b213-413c-a468-f51e7e3eb0e4-kube-api-access-jhngf" (OuterVolumeSpecName: "kube-api-access-jhngf") pod "4b289996-b213-413c-a468-f51e7e3eb0e4" (UID: "4b289996-b213-413c-a468-f51e7e3eb0e4"). InnerVolumeSpecName "kube-api-access-jhngf".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069895 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-provider-selection\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") " Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069907 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471/volumes/kubernetes.io~configmap/audit-policies: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069916 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-trusted-ca-bundle\") pod \"02502f0c-09a2-4a94-b4f4-92a060050951\" (UID: \"02502f0c-09a2-4a94-b4f4-92a060050951\") " Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069900 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~configmap/etcd-serving-ca: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069939 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-audit" (OuterVolumeSpecName: "audit") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.069944 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069975 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471/volumes/kubernetes.io~configmap/trusted-ca-bundle: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069998 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070000 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070004 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/673a603f-a83d-437b-bf5e-7a95a63a17fa/volumes/kubernetes.io~configmap/client-ca: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070036 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnhw\" (UniqueName: \"kubernetes.io/projected/0fdadbfc-e471-4e10-97e8-80b8e881aec6-kube-api-access-9gnhw\") pod \"network-check-target-qs9w4\" (UID: \"0fdadbfc-e471-4e10-97e8-80b8e881aec6\") " pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070047 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471" (UID: "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070053 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-env-overrides\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070079 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070068 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~projected/kube-api-access-sspj2: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070106 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471" (UID: "c8f5ce0b-5be2-49aa-ae7a-ddd7de103471"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070123 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~secret/v4-0-config-system-serving-cert: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070132 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02502f0c-09a2-4a94-b4f4-92a060050951-kube-api-access-sspj2" (OuterVolumeSpecName: "kube-api-access-sspj2") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "kube-api-access-sspj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070152 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~projected/kube-api-access-tcctn: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070163 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070164 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070184 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14819588-d3b0-492e-8c78-4bbee02f2eca-kube-api-access-tcctn" (OuterVolumeSpecName: "kube-api-access-tcctn") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "kube-api-access-tcctn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070185 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~secret/v4-0-config-user-template-provider-selection: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070193 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673a603f-a83d-437b-bf5e-7a95a63a17fa-client-ca" (OuterVolumeSpecName: "client-ca") pod "673a603f-a83d-437b-bf5e-7a95a63a17fa" (UID: "673a603f-a83d-437b-bf5e-7a95a63a17fa"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.069792 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~configmap/trusted-ca-bundle: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070213 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/673a603f-a83d-437b-bf5e-7a95a63a17fa/volumes/kubernetes.io~configmap/config: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070227 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070246 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~secret/etcd-client: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070275 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~secret/v4-0-config-system-session: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070284 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070299 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~secret/v4-0-config-user-template-error: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070290 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~secret/v4-0-config-system-ocp-branding-template: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070307 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.070318 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/673a603f-a83d-437b-bf5e-7a95a63a17fa/volumes/kubernetes.io~secret/serving-cert: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070331 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070362 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/673a603f-a83d-437b-bf5e-7a95a63a17fa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "673a603f-a83d-437b-bf5e-7a95a63a17fa" (UID: "673a603f-a83d-437b-bf5e-7a95a63a17fa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070364 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070367 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070415 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-run-systemd-system\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070432 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-env-overrides\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070435 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673a603f-a83d-437b-bf5e-7a95a63a17fa-config" (OuterVolumeSpecName: "config") pod "673a603f-a83d-437b-bf5e-7a95a63a17fa" (UID: "673a603f-a83d-437b-bf5e-7a95a63a17fa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.070449 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-config-volume\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope. -- Subject: Unit crio-conmon-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072613 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-mbwhz\" (UniqueName: \"kubernetes.io/projected/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14-kube-api-access-mbwhz\") pod \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab\" (UID: \"7cca1a4c-e8cc-4938-9e14-a4d8d979ad14\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072651 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovnkube-config\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072678 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-os-release\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072728 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-multus-cni-dir\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.072745 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes/kubernetes.io~secret/serving-cert: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072784 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-sys\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072797 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-multus-cni-dir\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072796 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14819588-d3b0-492e-8c78-4bbee02f2eca" (UID: "14819588-d3b0-492e-8c78-4bbee02f2eca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072820 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-systemd-units\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072836 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-os-release\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072845 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-sys\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072846 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ced4aec-1711-4abf-825a-c546047148b7-host\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072955 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-run-netns\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072893 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ced4aec-1711-4abf-825a-c546047148b7-host\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072986 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/839425af-4ad1-4627-b58f-20197745cb4a-hosts-file\") pod \"node-resolver-9bshd\" (UID: \"839425af-4ad1-4627-b58f-20197745cb4a\") " pod="openshift-dns/node-resolver-9bshd" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072991 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-run-netns\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.072966 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-run-systemd-system\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.073009 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-systemd-units\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.073011 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-log-socket\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.073041 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-log-socket\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.073044 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle: clearQuota called, but quotas disabled Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.073051 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/839425af-4ad1-4627-b58f-20197745cb4a-hosts-file\") pod \"node-resolver-9bshd\" (UID: \"839425af-4ad1-4627-b58f-20197745cb4a\") " pod="openshift-dns/node-resolver-9bshd" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.073061 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-node-metrics-cert\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074035 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovnkube-config\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074163 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074190 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-fg5lk\" (UniqueName: \"kubernetes.io/projected/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b-kube-api-access-fg5lk\") pod \"etcd-guard-hub-master-0.workload.bos2.lab\" (UID: \"16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b\") " pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074222 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/dd7e23a1-2620-491c-a453-b41708d2e0d7-metal3-ironic-tls\") pod \"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074244 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-run-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074268 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074290 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf9abfd8-f6ab-41d0-9984-1c374f00d734-kubelet-dir\") pod \"revision-pruner-8-hub-master-0.workload.bos2.lab\" (UID: \"bf9abfd8-f6ab-41d0-9984-1c374f00d734\") " pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074309 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-var-lib-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074330 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-run-ovn\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074354 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-host\") pod \"tuned-4pckj\" (UID: 
\"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074368 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-config-volume\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074362 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "02502f0c-09a2-4a94-b4f4-92a060050951" (UID: "02502f0c-09a2-4a94-b4f4-92a060050951"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074374 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-var-lock\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074406 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-var-lock\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074421 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf9abfd8-f6ab-41d0-9984-1c374f00d734-kubelet-dir\") pod \"revision-pruner-8-hub-master-0.workload.bos2.lab\" (UID: \"bf9abfd8-f6ab-41d0-9984-1c374f00d734\") " pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074423 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kube-api-access\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074441 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-var-lib-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074448 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-cnibin\") pod \"multus-cdt6c\" (UID: 
\"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074468 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-run-ovn\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074470 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cni-binary-copy\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074472 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074496 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-cni-binary-copy\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074510 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-host\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074449 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-run-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074520 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d7b22547-215c-4758-8154-a3bfc577ec12-node-bootstrap-token\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074563 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-cnibin\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074568 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074584 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-env-overrides\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074607 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-cert\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074629 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-cert\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074651 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhsm\" (UniqueName: \"kubernetes.io/projected/94cb9be9-32f4-413c-9fdf-a6e9307ff410-kube-api-access-lmhsm\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074672 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-node-log\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074693 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-ca\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074715 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf9abfd8-f6ab-41d0-9984-1c374f00d734-kube-api-access\") pod \"revision-pruner-8-hub-master-0.workload.bos2.lab\" (UID: \"bf9abfd8-f6ab-41d0-9984-1c374f00d734\") " pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074734 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-root\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:15:29.074759 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-wtmp\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074771 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cni-binary-copy\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074780 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-cni-bin\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074801 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-sys\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074817 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-node-log\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074819 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-root\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074821 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-jktgq\" (UniqueName: \"kubernetes.io/projected/16d2550a-6aa8-453b-9d72-f50466ef11b2-kube-api-access-jktgq\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074833 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-env-overrides\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074859 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-cni-bin\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074866 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50-kube-api-access\") pod \"revision-pruner-9-hub-master-0.workload.bos2.lab\" (UID: \"2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50\") " pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074887 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-sys\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074905 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-wtmp\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074907 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-textfile\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074933 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-var-lib-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074954 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-xpcqb\" (UniqueName: \"kubernetes.io/projected/839425af-4ad1-4627-b58f-20197745cb4a-kube-api-access-xpcqb\") pod \"node-resolver-9bshd\" (UID: \"839425af-4ad1-4627-b58f-20197745cb4a\") " pod="openshift-dns/node-resolver-9bshd" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074973 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-cni-netd\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074984 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-var-lib-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074987 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume 
\"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-textfile\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.074992 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-run-ovn\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075009 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-cni-binary-copy\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075015 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-run-ovn\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075017 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-cni-netd\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075024 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-s4ctm\" (UniqueName: \"kubernetes.io/projected/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-kube-api-access-s4ctm\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075043 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-var-run-dbus\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075062 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-5d6bg\" (UniqueName: \"kubernetes.io/projected/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-kube-api-access-5d6bg\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075081 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff6a907c-8dc5-4524-b928-d97ba7b430c3-metrics-client-ca\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075099 
8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-var-run-dbus\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075118 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075141 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-run-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075177 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc516524-2ee1-45e5-8b33-0266acf098d1-metrics-certs\") pod \"network-metrics-daemon-dzwx9\" (UID: \"fc516524-2ee1-45e5-8b33-0266acf098d1\") " pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075196 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-run-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075201 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-zk7nz\" (UniqueName: \"kubernetes.io/projected/2284ac10-60cf-4768-bd24-3ea63b730ce6-kube-api-access-zk7nz\") pod \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab\" (UID: \"2284ac10-60cf-4768-bd24-3ea63b730ce6\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075225 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-svqk6\" (UniqueName: \"kubernetes.io/projected/409cdcf0-1eab-47ad-9389-ad5809e748ff-kube-api-access-svqk6\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075246 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-etc\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075265 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-lib-modules\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075284 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cnibin\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075312 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-967j5\" (UniqueName: \"kubernetes.io/projected/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-kube-api-access-967j5\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075337 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-lib-modules\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075347 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-6vk9j\" (UniqueName: \"kubernetes.io/projected/d7b22547-215c-4758-8154-a3bfc577ec12-kube-api-access-6vk9j\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075349 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-cnibin\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075365 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/16d2550a-6aa8-453b-9d72-f50466ef11b2-etc\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075372 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-2fggn\" (UniqueName: \"kubernetes.io/projected/612bc2d6-261c-4dc3-9902-489a4589ec9b-kube-api-access-2fggn\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075428 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-master-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-master-metrics-cert\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075455 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/16d2550a-6aa8-453b-9d72-f50466ef11b2-var-lib-tuned-profiles-data\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075476 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-db4kg\" (UniqueName: \"kubernetes.io/projected/fc516524-2ee1-45e5-8b33-0266acf098d1-kube-api-access-db4kg\") pod \"network-metrics-daemon-dzwx9\" (UID: \"fc516524-2ee1-45e5-8b33-0266acf098d1\") " pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075497 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/612bc2d6-261c-4dc3-9902-489a4589ec9b-proxy-tls\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075518 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-etc-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075537 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-tls\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075554 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-etc-openvswitch\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075558 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-ldz65\" (UniqueName: \"kubernetes.io/projected/16c1efa7-495c-45d5-b9c1-00d078cb4114-kube-api-access-ldz65\") pod \"kube-apiserver-guard-hub-master-0.workload.bos2.lab\" (UID: \"16c1efa7-495c-45d5-b9c1-00d078cb4114\") " pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075583 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd7e23a1-2620-491c-a453-b41708d2e0d7-trusted-ca\") pod \"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075607 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume 
\"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/16d2550a-6aa8-453b-9d72-f50466ef11b2-var-lib-tuned-profiles-data\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075613 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7b22547-215c-4758-8154-a3bfc577ec12-certs\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075633 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff6a907c-8dc5-4524-b928-d97ba7b430c3-metrics-client-ca\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075653 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-s65s6\" (UniqueName: \"kubernetes.io/projected/ff6a907c-8dc5-4524-b928-d97ba7b430c3-kube-api-access-s65s6\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075675 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-etc-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075697 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-system-cni-dir\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075719 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-ca\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075732 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075754 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-os-release\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075761 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-system-cni-dir\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075776 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa-kubelet-dir\") pod \"revision-pruner-10-hub-master-0.workload.bos2.lab\" (UID: \"6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa\") " pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075797 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94cb9be9-32f4-413c-9fdf-a6e9307ff410-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075800 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-etc-openvswitch\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075815 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-os-release\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075819 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa-kubelet-dir\") pod \"revision-pruner-10-hub-master-0.workload.bos2.lab\" (UID: \"6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa\") " pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075804 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-2ssqt\" (UniqueName: \"kubernetes.io/projected/dd7e23a1-2620-491c-a453-b41708d2e0d7-kube-api-access-2ssqt\") pod \"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075831 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-node-metrics-cert\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075858 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-systemd-units\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075882 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-systemd-units\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075888 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa-kube-api-access\") pod \"revision-pruner-10-hub-master-0.workload.bos2.lab\" (UID: \"6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa\") " pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075913 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kubelet-dir\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075934 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/612bc2d6-261c-4dc3-9902-489a4589ec9b-rootfs\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075949 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metal3-ironic-tls\" (UniqueName: \"kubernetes.io/secret/dd7e23a1-2620-491c-a453-b41708d2e0d7-metal3-ironic-tls\") pod \"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075966 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-metrics-tls\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075968 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kubelet-dir\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075986 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50-kubelet-dir\") pod \"revision-pruner-9-hub-master-0.workload.bos2.lab\" (UID: 
\"2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50\") " pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.075987 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/612bc2d6-261c-4dc3-9902-489a4589ec9b-rootfs\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076006 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-slash\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076021 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50-kubelet-dir\") pod \"revision-pruner-9-hub-master-0.workload.bos2.lab\" (UID: \"2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50\") " pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076028 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-5htrb\" (UniqueName: \"kubernetes.io/projected/5ced4aec-1711-4abf-825a-c546047148b7-kube-api-access-5htrb\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076046 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-slash\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076046 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/612bc2d6-261c-4dc3-9902-489a4589ec9b-cookie-secret\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076079 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-ca\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076098 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ced4aec-1711-4abf-825a-c546047148b7-serviceca\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076119 8631 reconciler.go:269] "operationExecutor.MountVolume started for 
volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-system-cni-dir\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076141 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076163 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-8tsmq\" (UniqueName: \"kubernetes.io/projected/0dd28320-8b9c-4b86-baca-8c1d561a962c-kube-api-access-8tsmq\") pod \"ingress-canary-7v8f9\" (UID: \"0dd28320-8b9c-4b86-baca-8c1d561a962c\") " pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076166 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-system-cni-dir\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076175 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409cdcf0-1eab-47ad-9389-ad5809e748ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076183 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovnkube-config\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076211 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-cert\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076224 8631 reconciler.go:399] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-client-ca\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076235 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-jhngf\" (UniqueName: \"kubernetes.io/projected/4b289996-b213-413c-a468-f51e7e3eb0e4-kube-api-access-jhngf\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076248 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-nxzfv\" (UniqueName: 
\"kubernetes.io/projected/673a603f-a83d-437b-bf5e-7a95a63a17fa-kube-api-access-nxzfv\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076259 8631 reconciler.go:399] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-image-import-ca\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076270 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-serving-cert\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076283 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-svp2s\" (UniqueName: \"kubernetes.io/projected/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-kube-api-access-svp2s\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076296 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-cliconfig\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076305 8631 reconciler.go:399] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-etcd-client\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076314 8631 reconciler.go:399] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-etcd-client\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076319 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ced4aec-1711-4abf-825a-c546047148b7-serviceca\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076324 8631 reconciler.go:399] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-trusted-ca-bundle\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076333 8631 reconciler.go:399] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-trusted-ca-bundle\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076342 8631 reconciler.go:399] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-audit-policies\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076353 8631 reconciler.go:399] 
"Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-etcd-serving-ca\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076362 8631 reconciler.go:399] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b289996-b213-413c-a468-f51e7e3eb0e4-serving-cert\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076371 8631 reconciler.go:399] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-audit\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076381 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-ocp-branding-template\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076390 8631 reconciler.go:399] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-serving-cert\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076408 8631 reconciler.go:399] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673a603f-a83d-437b-bf5e-7a95a63a17fa-config\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076416 8631 reconciler.go:399] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/673a603f-a83d-437b-bf5e-7a95a63a17fa-serving-cert\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076425 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-service-ca\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076434 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-sspj2\" (UniqueName: \"kubernetes.io/projected/02502f0c-09a2-4a94-b4f4-92a060050951-kube-api-access-sspj2\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076443 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-session\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076452 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-tcctn\" (UniqueName: \"kubernetes.io/projected/14819588-d3b0-492e-8c78-4bbee02f2eca-kube-api-access-tcctn\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076461 8631 reconciler.go:399] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/673a603f-a83d-437b-bf5e-7a95a63a17fa-client-ca\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076471 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-provider-selection\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076481 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-trusted-ca-bundle\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076491 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-error\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076500 8631 reconciler.go:399] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-encryption-config\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076510 8631 reconciler.go:399] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02502f0c-09a2-4a94-b4f4-92a060050951-audit-policies\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076518 8631 reconciler.go:399] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14819588-d3b0-492e-8c78-4bbee02f2eca-config\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076527 8631 reconciler.go:399] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14819588-d3b0-492e-8c78-4bbee02f2eca-encryption-config\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076534 8631 reconciler.go:399] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-etcd-serving-ca\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076544 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-user-template-login\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076553 8631 reconciler.go:399] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-proxy-ca-bundles\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:15:29.076561 8631 reconciler.go:399] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471-serving-cert\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076571 8631 reconciler.go:399] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/02502f0c-09a2-4a94-b4f4-92a060050951-v4-0-config-system-router-certs\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076580 8631 reconciler.go:399] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b289996-b213-413c-a468-f51e7e3eb0e4-config\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076573 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-cert\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076659 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd7e23a1-2620-491c-a453-b41708d2e0d7-trusted-ca\") pod \"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076739 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409cdcf0-1eab-47ad-9389-ad5809e748ff-ovn-ca\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076756 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovnkube-config\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.076961 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-tls\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.077221 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ff6a907c-8dc5-4524-b928-d97ba7b430c3-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.077251 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/d7b22547-215c-4758-8154-a3bfc577ec12-node-bootstrap-token\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.077670 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc516524-2ee1-45e5-8b33-0266acf098d1-metrics-certs\") pod \"network-metrics-daemon-dzwx9\" (UID: \"fc516524-2ee1-45e5-8b33-0266acf098d1\") " pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.077756 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-master-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-ovn-master-metrics-cert\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.077912 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/612bc2d6-261c-4dc3-9902-489a4589ec9b-proxy-tls\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.078437 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7b22547-215c-4758-8154-a3bfc577ec12-certs\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.078565 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-metrics-tls\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.078693 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/612bc2d6-261c-4dc3-9902-489a4589ec9b-cookie-secret\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.083082 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnhw\" (UniqueName: \"kubernetes.io/projected/0fdadbfc-e471-4e10-97e8-80b8e881aec6-kube-api-access-9gnhw\") pod \"network-check-target-qs9w4\" (UID: \"0fdadbfc-e471-4e10-97e8-80b8e881aec6\") " pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope. 
-- Subject: Unit crio-conmon-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.086433 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbwhz\" (UniqueName: \"kubernetes.io/projected/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14-kube-api-access-mbwhz\") pod \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab\" (UID: \"7cca1a4c-e8cc-4938-9e14-a4d8d979ad14\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.087633 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf9abfd8-f6ab-41d0-9984-1c374f00d734-kube-api-access\") pod \"revision-pruner-8-hub-master-0.workload.bos2.lab\" (UID: \"bf9abfd8-f6ab-41d0-9984-1c374f00d734\") " pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.087783 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5lk\" (UniqueName: \"kubernetes.io/projected/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b-kube-api-access-fg5lk\") pod \"etcd-guard-hub-master-0.workload.bos2.lab\" (UID: \"16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b\") " pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.088170 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-jktgq\" (UniqueName: \"kubernetes.io/projected/16d2550a-6aa8-453b-9d72-f50466ef11b2-kube-api-access-jktgq\") pod \"tuned-4pckj\" (UID: \"16d2550a-6aa8-453b-9d72-f50466ef11b2\") " pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.089055 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhsm\" (UniqueName: \"kubernetes.io/projected/94cb9be9-32f4-413c-9fdf-a6e9307ff410-kube-api-access-lmhsm\") pod \"multus-additional-cni-plugins-7ks6h\" (UID: \"94cb9be9-32f4-413c-9fdf-a6e9307ff410\") " pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.089257 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpcqb\" (UniqueName: \"kubernetes.io/projected/839425af-4ad1-4627-b58f-20197745cb4a-kube-api-access-xpcqb\") pod \"node-resolver-9bshd\" (UID: \"839425af-4ad1-4627-b58f-20197745cb4a\") " pod="openshift-dns/node-resolver-9bshd" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.089353 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4ctm\" (UniqueName: \"kubernetes.io/projected/a88a1018-cc7c-4bd1-b3d2-0d960b53459c-kube-api-access-s4ctm\") pod \"ovnkube-master-fld8m\" (UID: \"a88a1018-cc7c-4bd1-b3d2-0d960b53459c\") " pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.090090 8631 
operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50-kube-api-access\") pod \"revision-pruner-9-hub-master-0.workload.bos2.lab\" (UID: \"2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50\") " pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.090287 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kube-api-access\") pod \"installer-10-hub-master-0.workload.bos2.lab\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a. -- Subject: Unit crio-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.105665 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d6bg\" (UniqueName: \"kubernetes.io/projected/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e-kube-api-access-5d6bg\") pod \"dns-default-srzv5\" (UID: \"3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e\") " pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07. -- Subject: Unit crio-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.125670 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk7nz\" (UniqueName: \"kubernetes.io/projected/2284ac10-60cf-4768-bd24-3ea63b730ce6-kube-api-access-zk7nz\") pod \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab\" (UID: \"2284ac10-60cf-4768-bd24-3ea63b730ce6\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.145351 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-svqk6\" (UniqueName: \"kubernetes.io/projected/409cdcf0-1eab-47ad-9389-ad5809e748ff-kube-api-access-svqk6\") pod \"ovnkube-node-897lw\" (UID: \"409cdcf0-1eab-47ad-9389-ad5809e748ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.158247824Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=eb48a803-0b1e-43d5-b355-326719756841 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.160483 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77321459d336b7d15305c9b9a83e4081.slice/crio-48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6.scope WatchSource:0}: Error finding container 48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6: Status 404 returned error can't find the container with id 48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.162579008Z" level=info msg="Ran pod sandbox 48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6 with infra container: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/POD" id=eb48a803-0b1e-43d5-b355-326719756841 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.163299681Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=1e6f2b26-1c2c-4826-80ca-056043d78b5b name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.163666942Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=71979e85-2849-46ab-a4fc-50a9cd83da6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.163730008Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1 not found" id=1e6f2b26-1c2c-4826-80ca-056043d78b5b name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:15:29.163988728Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=25832e30-4b35-4298-b1d5-0429f5c3c946 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.164950 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vk9j\" (UniqueName: \"kubernetes.io/projected/d7b22547-215c-4758-8154-a3bfc577ec12-kube-api-access-6vk9j\") pod \"machine-config-server-vpsv9\" (UID: \"d7b22547-215c-4758-8154-a3bfc577ec12\") " pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.165870 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38eebeadc7ddc4d42d1de9a5e4ac69f1.slice/crio-90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea.scope WatchSource:0}: Error finding container 90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea: Status 404 returned error can't find the container with id 90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.166260287Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.167724753Z" level=info msg="Ran pod sandbox 90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea with infra container: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/POD" id=71979e85-2849-46ab-a4fc-50a9cd83da6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.168319560Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=6eab97d6-ae3b-4eee-90b1-696b97082b09 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.168426014Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020 not found" id=6eab97d6-ae3b-4eee-90b1-696b97082b09 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.168647156Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=017ef411-b5fa-46e0-938e-6f133b5d5064 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.170476411Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.185339 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.185622 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-967j5\" (UniqueName: \"kubernetes.io/projected/b6c2cdc5-967e-4062-b6e6-f6cf372cc21c-kube-api-access-967j5\") pod \"multus-cdt6c\" (UID: \"b6c2cdc5-967e-4062-b6e6-f6cf372cc21c\") " pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.185619078Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.185652246Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.191882 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.192218440Z" level=info msg="Running pod sandbox: openshift-multus/multus-additional-cni-plugins-7ks6h/POD" id=055c03d6-a741-46e8-8879-176962cb25c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.192249640Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.192544570Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/de00ae5b-2a59-4ecb-8575-5898873367d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.192564901Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.194313385Z" level=info msg="Created container 6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a: openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/render-config-keepalived" id=7dc36234-54f4-40a7-a57c-e1d2acfdc196 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.194661881Z" level=info msg="Created container ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/verify-api-int-resolvable" id=a84546f0-a974-44e0-a69e-bb027eaa1bc2 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.194820362Z" level=info msg="Starting container: 6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a" id=15324cfc-1bc5-4791-b711-c2657dfc287d name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.194846123Z" level=info msg="Starting container: 
ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07" id=c12c705c-1c4c-4e51-a291-5a6082ba3d45 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.196625255Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=64d54f4a-8bb6-435d-ab59-a7b2a2bf8935 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.196949147Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=055c03d6-a741-46e8-8879-176962cb25c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.198796 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eb8d73fcd73cda1a9e34d91bb51e339.slice/crio-8456cad41ba97a04aeda7d023140cd9e70ca71ca5b7791529fbe81e3887613f8.scope WatchSource:0}: Error finding container 8456cad41ba97a04aeda7d023140cd9e70ca71ca5b7791529fbe81e3887613f8: Status 404 returned error can't find the container with id 8456cad41ba97a04aeda7d023140cd9e70ca71ca5b7791529fbe81e3887613f8 Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.198906 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94cb9be9_32f4_413c_9fdf_a6e9307ff410.slice/crio-fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b.scope WatchSource:0}: Error finding container fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b: Status 404 returned error can't find the container with id fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.200682237Z" level=info msg="Ran pod sandbox 8456cad41ba97a04aeda7d023140cd9e70ca71ca5b7791529fbe81e3887613f8 with infra container: openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/POD" id=64d54f4a-8bb6-435d-ab59-a7b2a2bf8935 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.201197094Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=49e31a03-130e-4799-b5d6-e0a05913cfbb name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.201250845Z" level=info msg="Started container" PID=8807 containerID=6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a description=openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/render-config-keepalived id=15324cfc-1bc5-4791-b711-c2657dfc287d name=/runtime.v1.RuntimeService/StartContainer sandboxID=46776229e966aaf0cd0c958b2e048b32ae5c8adb2af3d0d1833ad7bc56fef6c5 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.201344611Z" level=info msg="Image status: 
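[Note: The repeated "Skipping invalid sysctl ... not allowed with host net enabled" warnings are CRI-O declining to apply net.ipv4.ping_group_range to pods that share the host network namespace: net.* sysctls are namespaced, so setting them for a hostNetwork pod would mutate the host itself. A simplified Go sketch of that guard; the function name and exact prefix list are mine, not CRI-O's:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // skipSysctl reports whether a pod-level sysctl must be ignored because
    // the pod shares the corresponding host namespace (mirroring the warning).
    func skipSysctl(name string, hostNet, hostIPC bool) bool {
    	if hostNet && strings.HasPrefix(name, "net.") {
    		return true // would change the host's network namespace
    	}
    	if hostIPC && (strings.HasPrefix(name, "kernel.msg") ||
    		strings.HasPrefix(name, "kernel.sem") ||
    		strings.HasPrefix(name, "kernel.shm") ||
    		strings.HasPrefix(name, "fs.mqueue.")) {
    		return true // would change the host's IPC settings
    	}
    	return false
    }

    func main() {
    	fmt.Println(skipSysctl("net.ipv4.ping_group_range", true, false))  // true: skipped
    	fmt.Println(skipSysctl("net.ipv4.ping_group_range", false, false)) // false: applied
    }
]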
&ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=49e31a03-130e-4799-b5d6-e0a05913cfbb name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.201668193Z" level=info msg="Ran pod sandbox fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b with infra container: openshift-multus/multus-additional-cni-plugins-7ks6h/POD" id=055c03d6-a741-46e8-8879-176962cb25c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.201813127Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=5e1fbada-2818-4aa7-b185-e5d7e1c75a5a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.201899965Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5e1fbada-2818-4aa7-b185-e5d7e1c75a5a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.202072955Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d" id=b05fd989-9726-4e7e-b691-e6d2d18b1e7e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.202170772Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d not found" id=b05fd989-9726-4e7e-b691-e6d2d18b1e7e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.202362090Z" level=info msg="Creating container: openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/render-config-coredns" id=2d9ca773-0388-4024-9174-412d07daa283 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.202423576Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.202426191Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d" id=30bcf2a3-3e8d-4e3b-984e-43dcd02c0325 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.203951613Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.206229 8631 operation_generator.go:730] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-2fggn\" (UniqueName: \"kubernetes.io/projected/612bc2d6-261c-4dc3-9902-489a4589ec9b-kube-api-access-2fggn\") pod \"machine-config-daemon-jkffc\" (UID: \"612bc2d6-261c-4dc3-9902-489a4589ec9b\") " pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.210292 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9bshd" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.210508024Z" level=info msg="Running pod sandbox: openshift-dns/node-resolver-9bshd/POD" id=e50cc584-ffe2-4fb1-920d-6f2753b5a0ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.210537782Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.212073738Z" level=info msg="Started container" PID=8813 containerID=ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07 description=openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/verify-api-int-resolvable id=c12c705c-1c4c-4e51-a291-5a6082ba3d45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.214023151Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=e50cc584-ffe2-4fb1-920d-6f2753b5a0ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.216335 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.216487 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod839425af_4ad1_4627_b58f_20197745cb4a.slice/crio-8dd056b754da8a49246e3d7fc9fae2fc653e702f35aec269e12c16cef53eadc1.scope WatchSource:0}: Error finding container 8dd056b754da8a49246e3d7fc9fae2fc653e702f35aec269e12c16cef53eadc1: Status 404 returned error can't find the container with id 8dd056b754da8a49246e3d7fc9fae2fc653e702f35aec269e12c16cef53eadc1 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.216563936Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.216594485Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.218373757Z" level=info msg="Ran pod sandbox 8dd056b754da8a49246e3d7fc9fae2fc653e702f35aec269e12c16cef53eadc1 with infra container: openshift-dns/node-resolver-9bshd/POD" id=e50cc584-ffe2-4fb1-920d-6f2753b5a0ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.219084437Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fc458ece66c8d4184b45b5c495a372a96b47432ae5a39844cd5837e3981685b" id=43a62e9a-cb9f-43e7-ad81-90e41aced6fd name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.219223268Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fc458ece66c8d4184b45b5c495a372a96b47432ae5a39844cd5837e3981685b not found" id=43a62e9a-cb9f-43e7-ad81-90e41aced6fd name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.219436978Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fc458ece66c8d4184b45b5c495a372a96b47432ae5a39844cd5837e3981685b" id=8869046f-1591-413c-8215-e8401d4317fa name=/runtime.v1.ImageService/PullImage Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.221670683Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fc458ece66c8d4184b45b5c495a372a96b47432ae5a39844cd5837e3981685b\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.223578 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vpsv9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.223739220Z" level=info msg="Running pod sandbox: openshift-machine-config-operator/machine-config-server-vpsv9/POD" id=da237776-9e7a-49cf-89fc-b9c1e0f8d254 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.223765408Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.223899958Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/8e70d917-e8b5-492e-b5a2-c4744138f447 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.223920487Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.225911 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-db4kg\" (UniqueName: \"kubernetes.io/projected/fc516524-2ee1-45e5-8b33-0266acf098d1-kube-api-access-db4kg\") pod \"network-metrics-daemon-dzwx9\" (UID: \"fc516524-2ee1-45e5-8b33-0266acf098d1\") " pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.227131377Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=da237776-9e7a-49cf-89fc-b9c1e0f8d254 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.230671 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.231036162Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.231082369Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.231216 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7b22547_215c_4758_8154_a3bfc577ec12.slice/crio-47feb76995838b353c3f736eacec3a5a4a678f77ea73390106cb5e1d6193debd.scope WatchSource:0}: Error finding container 47feb76995838b353c3f736eacec3a5a4a678f77ea73390106cb5e1d6193debd: Status 404 returned error can't find the container with id 47feb76995838b353c3f736eacec3a5a4a678f77ea73390106cb5e1d6193debd Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.233814761Z" level=info msg="Ran pod sandbox 47feb76995838b353c3f736eacec3a5a4a678f77ea73390106cb5e1d6193debd with infra container: openshift-machine-config-operator/machine-config-server-vpsv9/POD" id=da237776-9e7a-49cf-89fc-b9c1e0f8d254 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.234362532Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4" id=f632b15b-7f8d-41be-a1b4-fb5dca67372c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.234507743Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4 not found" id=f632b15b-7f8d-41be-a1b4-fb5dca67372c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.234914328Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4" id=8157d0bd-d7eb-405a-ba8f-1d3b55bf0da7 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope has successfully entered the 'dead' state. Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope: Consumed 30ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope completed and consumed the indicated resources. Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope has successfully entered the 'dead' state. Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.237525685Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope: Consumed 43ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope completed and consumed the indicated resources. Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.238723534Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/a7a7cc4b-893e-4643-a4ed-f76ed127bae8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.238762567Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.241250 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-4pckj" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.241518350Z" level=info msg="Running pod sandbox: openshift-cluster-node-tuning-operator/tuned-4pckj/POD" id=e6e57028-7712-4e66-8beb-fe1982d03eb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.241553559Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.245872314Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=e6e57028-7712-4e66-8beb-fe1982d03eb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.247447 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d2550a_6aa8_453b_9d72_f50466ef11b2.slice/crio-e9043f79ed9fe07655c793ca367554a40eef242c8126e361ece6172594d6895f.scope WatchSource:0}: Error finding container e9043f79ed9fe07655c793ca367554a40eef242c8126e361ece6172594d6895f: Status 404 returned error can't find the container with id e9043f79ed9fe07655c793ca367554a40eef242c8126e361ece6172594d6895f Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.249856722Z" level=info msg="Ran pod sandbox e9043f79ed9fe07655c793ca367554a40eef242c8126e361ece6172594d6895f with infra container: openshift-cluster-node-tuning-operator/tuned-4pckj/POD" id=e6e57028-7712-4e66-8beb-fe1982d03eb5 
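[Note: Each container above shows up as a pair of transient systemd units: crio-<id>.scope for the container process and crio-conmon-<id>.scope for its conmon monitor, which is why short-lived init containers such as render-config-keepalived immediately produce matching "Succeeded" and "Consumed ... CPU time" entries. A small sketch, assuming the github.com/coreos/go-systemd module, of inspecting such a scope over systemd's D-Bus API; the unit name below is the keepalived init container's scope from this log:

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
    	ctx := context.Background()
    	conn, err := dbus.NewWithContext(ctx) // system-bus connection to systemd
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Scope name taken from the log entries above.
    	unit := "crio-6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a.scope"
    	props, err := conn.GetUnitPropertiesContext(ctx, unit)
    	if err != nil {
    		panic(err)
    	}
    	// ActiveState/SubState move to inactive/dead when the scope "Succeeded";
    	// the CPU figure behind "Consumed ... CPU time" lives in the scope's
    	// accounting properties.
    	fmt.Println("ActiveState:", props["ActiveState"], "SubState:", props["SubState"])
    }
]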
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.250315917Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:946567bcda2161bc1f55a6aa236106c947c5d863225f024c8c46f19b91b71679" id=3dcf654d-b5d1-442e-921d-cb3f2d5b3378 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.250426712Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:946567bcda2161bc1f55a6aa236106c947c5d863225f024c8c46f19b91b71679 not found" id=3dcf654d-b5d1-442e-921d-cb3f2d5b3378 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.250678102Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:946567bcda2161bc1f55a6aa236106c947c5d863225f024c8c46f19b91b71679" id=9868f4a4-43b6-4ea0-b656-8779ee6b7916 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.252595460Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:946567bcda2161bc1f55a6aa236106c947c5d863225f024c8c46f19b91b71679\""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope.
-- Subject: Unit crio-conmon-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.256456 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.256688954Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.256720560Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.265948620Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/02773faa-a579-4c16-9438-c3e56af30922 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.265973632Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.266838 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldz65\" (UniqueName: \"kubernetes.io/projected/16c1efa7-495c-45d5-b9c1-00d078cb4114-kube-api-access-ldz65\") pod \"kube-apiserver-guard-hub-master-0.workload.bos2.lab\" (UID: \"16c1efa7-495c-45d5-b9c1-00d078cb4114\") " pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.268832 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.269162710Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.269189449Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.275950239Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/c097c349-f4ba-4cf6-8497-d044db4d9cd8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.275970397Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.
-- Subject: Unit crio-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.286917 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-s65s6\" (UniqueName: \"kubernetes.io/projected/ff6a907c-8dc5-4524-b928-d97ba7b430c3-kube-api-access-s65s6\") pod \"node-exporter-pbh26\" (UID: \"ff6a907c-8dc5-4524-b928-d97ba7b430c3\") " pod="openshift-monitoring/node-exporter-pbh26"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.289617 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ssqt\" (UniqueName: \"kubernetes.io/projected/dd7e23a1-2620-491c-a453-b41708d2e0d7-kube-api-access-2ssqt\") pod \"ironic-proxy-nhh2z\" (UID: \"dd7e23a1-2620-491c-a453-b41708d2e0d7\") " pod="openshift-machine-api/ironic-proxy-nhh2z"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.292485 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.292821776Z" level=info msg="Running pod sandbox: openshift-ovn-kubernetes/ovnkube-master-fld8m/POD" id=747cf68d-d16f-4940-995e-0f35bdc56660 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.292854614Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.296439948Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=747cf68d-d16f-4940-995e-0f35bdc56660 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.298144 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda88a1018_cc7c_4bd1_b3d2_0d960b53459c.slice/crio-f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7.scope WatchSource:0}: Error finding container f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7: Status 404 returned error can't find the container with id f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.300321155Z" level=info msg="Ran pod sandbox f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7 with infra container: openshift-ovn-kubernetes/ovnkube-master-fld8m/POD" id=747cf68d-d16f-4940-995e-0f35bdc56660 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.300952827Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=4fc4a9c4-0483-42f4-a045-edecc8fea9e2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.301166485Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf not found" id=4fc4a9c4-0483-42f4-a045-edecc8fea9e2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.301429760Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=9411990f-bcd3-4118-81e9-5898210b9edb name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.302496310Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf\""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.303826 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/ironic-proxy-nhh2z"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.304022767Z" level=info msg="Running pod sandbox: openshift-machine-api/ironic-proxy-nhh2z/POD" id=f187a9ec-64c5-4380-b275-be9a537b6ced name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.304055329Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.306000 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa-kube-api-access\") pod \"revision-pruner-10-hub-master-0.workload.bos2.lab\" (UID: \"6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa\") " pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.308719140Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=f187a9ec-64c5-4380-b275-be9a537b6ced name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.309583 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.309783694Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.309807150Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.310970 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd7e23a1_2620_491c_a453_b41708d2e0d7.slice/crio-3356e4ad1b668d8247a1c1445668566ae4738fa934546baaa9bad867ca9a7563.scope WatchSource:0}: Error finding container 3356e4ad1b668d8247a1c1445668566ae4738fa934546baaa9bad867ca9a7563: Status 404 returned error can't find the container with id 3356e4ad1b668d8247a1c1445668566ae4738fa934546baaa9bad867ca9a7563
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.311004539Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1\""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.312743000Z" level=info msg="Ran pod sandbox 3356e4ad1b668d8247a1c1445668566ae4738fa934546baaa9bad867ca9a7563 with infra container: openshift-machine-api/ironic-proxy-nhh2z/POD" id=f187a9ec-64c5-4380-b275-be9a537b6ced name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.313307679Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b" id=e05e4378-6891-4270-a590-2c68707116b4 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.313405706Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b not found" id=e05e4378-6891-4270-a590-2c68707116b4 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.313650339Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b" id=bdb11cc7-f1e3-4517-85b5-a68536cd15b3 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.315695701Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b\""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.316810366Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/280f54b0-a390-4ad3-b63a-99c83abd7c76 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.316826779Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.325712 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htrb\" (UniqueName: \"kubernetes.io/projected/5ced4aec-1711-4abf-825a-c546047148b7-kube-api-access-5htrb\") pod \"node-ca-2j9w6\" (UID: \"5ced4aec-1711-4abf-825a-c546047148b7\") " pod="openshift-image-registry/node-ca-2j9w6"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.331587734Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1\""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.337637 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.337841706Z" level=info msg="Running pod sandbox: openshift-ovn-kubernetes/ovnkube-node-897lw/POD" id=fd938674-eacd-473c-b2b0-0dce1a396224 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.337869955Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.341344384Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=fd938674-eacd-473c-b2b0-0dce1a396224 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.343630 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod409cdcf0_1eab_47ad_9389_ad5809e748ff.slice/crio-39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b.scope WatchSource:0}: Error finding container 39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b: Status 404 returned error can't find the container with id 39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.344505 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tsmq\" (UniqueName: \"kubernetes.io/projected/0dd28320-8b9c-4b86-baca-8c1d561a962c-kube-api-access-8tsmq\") pod \"ingress-canary-7v8f9\" (UID: \"0dd28320-8b9c-4b86-baca-8c1d561a962c\") " pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.345028853Z" level=info msg="Ran pod sandbox 39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b with infra container: openshift-ovn-kubernetes/ovnkube-node-897lw/POD" id=fd938674-eacd-473c-b2b0-0dce1a396224 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.345417357Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=69dbf373-c7f7-4318-955f-42961b38c779 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.345511491Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf not found" id=69dbf373-c7f7-4318-955f-42961b38c779 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.345739200Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=02a62e59-c8a6-4ed3-ab5c-6220df00f1d7 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.347651005Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf\""
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.349096 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.349282337Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.349301863Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.355956574Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/27823c6e-462f-4dbc-898d-c1a8eb118472 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.355975145Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.358160 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.358582271Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.358605654Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.365125769Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d5e84b71-e25e-4483-af5e-644be83e25d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.365145343Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.368304 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
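[Note: The "Got pod network &{Name:... NetNS:... RuntimeConfig:map[multus-cni-network:...]}" lines are Go struct dumps CRI-O emits before handing each sandbox to the Multus meta-plugin. As a reading aid, a reduced Go mirror of the visible fields (this is a sketch for interpreting the dumps, not the vendored ocicni type itself), populated from the revision-pruner-10 entry above:

    package main

    import "fmt"

    // PodNetwork mirrors the fields visible in the "&{...}" dumps above.
    type PodNetwork struct {
    	Name          string   // pod name
    	Namespace     string   // pod namespace
    	ID            string   // sandbox (infra container) ID
    	UID           string   // pod UID
    	NetNS         string   // bind-mounted network namespace path
    	Networks      []string // extra attachments; empty means default net only
    	RuntimeConfig map[string]struct{ IP, MAC string } // per-network overrides
    }

    func main() {
    	pn := PodNetwork{
    		Name:      "revision-pruner-10-hub-master-0.workload.bos2.lab",
    		Namespace: "openshift-kube-apiserver",
    		ID:        "e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9",
    		UID:       "6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa",
    		NetNS:     "/var/run/netns/d5e84b71-e25e-4483-af5e-644be83e25d4",
    		RuntimeConfig: map[string]struct{ IP, MAC string }{
    			"multus-cni-network": {}, // empty overrides, as in the log
    		},
    	}
    	fmt.Printf("%+v\n", pn)
    }
]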
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.368509614Z" level=info msg="Created container e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838: openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/render-config-coredns" id=2d9ca773-0388-4024-9174-412d07daa283 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.368647211Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.368675765Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.368746881Z" level=info msg="Starting container: e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838" id=c3064ca4-d797-48f2-a527-c19a1ca91f3f name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.375402063Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/a7e0c2fc-8546-4204-bc65-61a2a91b42a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.375422051Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.385856658Z" level=info msg="Started container" PID=8989 containerID=e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838 description=openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/render-config-coredns id=c3064ca4-d797-48f2-a527-c19a1ca91f3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8456cad41ba97a04aeda7d023140cd9e70ca71ca5b7791529fbe81e3887613f8 Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.398387 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-apiserver/apiserver-746c4bf98c-r7nkz] Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.401570 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-apiserver/apiserver-746c4bf98c-r7nkz] Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope has successfully entered the 'dead' state. 
Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope: Consumed 39ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope completed and consumed the indicated resources. Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope has successfully entered the 'dead' state. Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope: Consumed 38ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838.scope completed and consumed the indicated resources. Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.456993882Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.459016 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-oauth-apiserver/apiserver-86c7cf6467-v5ckj] Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.461734 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-oauth-apiserver/apiserver-86c7cf6467-v5ckj] Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.474826304Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.477029 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cdt6c" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.477076233Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.477290477Z" level=info msg="Running pod sandbox: openshift-multus/multus-cdt6c/POD" id=042272fb-901e-4032-9124-d34ddfedc0cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.477321466Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.481679375Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=042272fb-901e-4032-9124-d34ddfedc0cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.498083 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.501636955Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.501684638Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.503052954Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fc458ece66c8d4184b45b5c495a372a96b47432ae5a39844cd5837e3981685b\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.503881 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jkffc" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.504117732Z" level=info msg="Running pod sandbox: openshift-machine-config-operator/machine-config-daemon-jkffc/POD" id=b2809fc9-fde2-424a-b82b-28681c212307 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.504152571Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.504475 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6c2cdc5_967e_4062_b6e6_f6cf372cc21c.slice/crio-cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8.scope WatchSource:0}: Error finding container cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8: Status 404 returned error can't find the container with id cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.506250509Z" level=info msg="Ran pod sandbox cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8 with infra container: openshift-multus/multus-cdt6c/POD" id=042272fb-901e-4032-9124-d34ddfedc0cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.506978273Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=957a8ff5-65e1-45f4-82d9-bacc46696fc8 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.508572920Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0 not found" id=957a8ff5-65e1-45f4-82d9-bacc46696fc8 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.508835132Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=cc36ce70-1d78-4c70-80ed-9e7ecd0d20b8 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.510831250Z" level=warning msg="Skipping invalid 
sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=b2809fc9-fde2-424a-b82b-28681c212307 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.511213082Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.512565692Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/087347b4-6e39-42b1-8aba-1e49de29f9da Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.512586913Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.512839 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod612bc2d6_261c_4dc3_9902_489a4589ec9b.slice/crio-6ad25ee8d88b9ea4bf65ebcb8e94ffc345f93b4faabdf223385a04740aa28e19.scope WatchSource:0}: Error finding container 6ad25ee8d88b9ea4bf65ebcb8e94ffc345f93b4faabdf223385a04740aa28e19: Status 404 returned error can't find the container with id 6ad25ee8d88b9ea4bf65ebcb8e94ffc345f93b4faabdf223385a04740aa28e19 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.514718544Z" level=info msg="Ran pod sandbox 6ad25ee8d88b9ea4bf65ebcb8e94ffc345f93b4faabdf223385a04740aa28e19 with infra container: openshift-machine-config-operator/machine-config-daemon-jkffc/POD" id=b2809fc9-fde2-424a-b82b-28681c212307 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.515267968Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4" id=23f9aa09-dc80-40e5-bcc5-0f97a3d2102a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.515365899Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4 not found" id=23f9aa09-dc80-40e5-bcc5-0f97a3d2102a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:29.515519 8631 kuberuntime_manager.go:862] container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4,Command:[/usr/bin/machine-config-daemon],Args:[start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2fggn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod machine-config-daemon-jkffc_openshift-machine-config-operator(612bc2d6-261c-4dc3-9902-489a4589ec9b): ErrImagePull: pull QPS exceeded Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.515679783Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" id=489ca859-2f29-4d98-ac71-f23614ff29d6 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.515780892Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3 not found" id=489ca859-2f29-4d98-ac71-f23614ff29d6 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:29.515885 8631 kuberuntime_manager.go:862] container &Container{Name:oauth-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3,Command:[],Args:[--https-address=:9001 --provider=openshift --openshift-service-account=machine-config-daemon --upstream=http://127.0.0.1:8797 --tls-cert=/etc/tls/private/tls.crt --tls-key=/etc/tls/private/tls.key --cookie-secret-file=/etc/tls/cookie-secret/cookie-secret --openshift-sar={"resource": "namespaces", "verb": "get"} --openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cookie-secret,ReadOnly:false,MountPath:/etc/tls/cookie-secret,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2fggn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod machine-config-daemon-jkffc_openshift-machine-config-operator(612bc2d6-261c-4dc3-9902-489a4589ec9b): ErrImagePull: pull QPS exceeded Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: 
E0123 16:15:29.516989 8631 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"oauth-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openshift-machine-config-operator/machine-config-daemon-jkffc" podUID=612bc2d6-261c-4dc3-9902-489a4589ec9b Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.518267 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-authentication/oauth-openshift-868d5f6bf8-ttp4c] Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.520567 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-authentication/oauth-openshift-868d5f6bf8-ttp4c] Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.532631007Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.534143051Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:946567bcda2161bc1f55a6aa236106c947c5d863225f024c8c46f19b91b71679\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope has successfully entered the 'dead' state. Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope: Consumed 212ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope completed and consumed the indicated resources. Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope has successfully entered the 'dead' state. Jan 23 16:15:29 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07.scope completed and consumed the indicated resources. Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.550024 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-pbh26" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.550228637Z" level=info msg="Running pod sandbox: openshift-monitoring/node-exporter-pbh26/POD" id=aaa34698-de81-4055-84d3-65239f945423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.550261657Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.554593150Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=aaa34698-de81-4055-84d3-65239f945423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.556874 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff6a907c_8dc5_4524_b928_d97ba7b430c3.slice/crio-c4376b9e0340b1b255c30c0cd7e1eca321fd1edc94cf24b4db89a98ab24c43f9.scope WatchSource:0}: Error finding container c4376b9e0340b1b255c30c0cd7e1eca321fd1edc94cf24b4db89a98ab24c43f9: Status 404 returned error can't find the container with id c4376b9e0340b1b255c30c0cd7e1eca321fd1edc94cf24b4db89a98ab24c43f9 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.558568605Z" level=info msg="Ran pod sandbox c4376b9e0340b1b255c30c0cd7e1eca321fd1edc94cf24b4db89a98ab24c43f9 with infra container: openshift-monitoring/node-exporter-pbh26/POD" id=aaa34698-de81-4055-84d3-65239f945423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.559071303Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" id=f26b6f26-9ec7-4875-b7ea-e60d785971fe name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.559188648Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac not found" id=f26b6f26-9ec7-4875-b7ea-e60d785971fe name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:29.559315 8631 kuberuntime_manager.go:862] init container &Container{Name:init-textfile,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac,Command:[/bin/sh -c [[ ! 
-d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init -perm /111 -type f -exec {} \;],Args:[],WorkingDir:/var/node_exporter/textfile,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMPDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{1 -3} {} 1m DecimalSI},memory: {{1048576 0} {} 1Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:node-exporter-textfile,ReadOnly:false,MountPath:/var/node_exporter/textfile,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:node-exporter-wtmp,ReadOnly:true,MountPath:/var/log/wtmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-s65s6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod node-exporter-pbh26_openshift-monitoring(ff6a907c-8dc5-4524-b928-d97ba7b430c3): ErrImagePull: pull QPS exceeded Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:29.559340 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-textfile\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-monitoring/node-exporter-pbh26" podUID=ff6a907c-8dc5-4524-b928-d97ba7b430c3 Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.563763 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.563966806Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.563995482Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.571294450Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/8448acfc-4457-48bb-a909-a4c132ec8212 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.571314789Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.573305 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.573503266Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.573527743Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.574624260Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.580602835Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/e5601c10-aed1-4911-b3b4-1c538160a0ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.580621983Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.582295 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2j9w6" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.582472684Z" level=info msg="Running pod sandbox: openshift-image-registry/node-ca-2j9w6/POD" id=ca72920c-23f3-4b4c-8f3e-6ed817312209 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.582493322Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.586517602Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=ca72920c-23f3-4b4c-8f3e-6ed817312209 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:15:29.588553 8631 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ced4aec_1711_4abf_825a_c546047148b7.slice/crio-09800a84b987461f550d247ef6464a0986fe8ff5e6d4c93c478c84298037f1c2.scope WatchSource:0}: Error finding container 09800a84b987461f550d247ef6464a0986fe8ff5e6d4c93c478c84298037f1c2: Status 404 returned error can't find the container with id 09800a84b987461f550d247ef6464a0986fe8ff5e6d4c93c478c84298037f1c2 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.592886042Z" level=info msg="Ran pod sandbox 09800a84b987461f550d247ef6464a0986fe8ff5e6d4c93c478c84298037f1c2 with infra container: openshift-image-registry/node-ca-2j9w6/POD" id=ca72920c-23f3-4b4c-8f3e-6ed817312209 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.594033711Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a" id=5fbdcf5e-6b05-4eca-b268-982e457dd09f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.594149050Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a not found" id=5fbdcf5e-6b05-4eca-b268-982e457dd09f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:29.594282 8631 kuberuntime_manager.go:862] container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: while [ true ]; Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: do Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: for f in $(ls /tmp/serviceca); do Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: echo $f Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: ca_file_path="/tmp/serviceca/${f}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: if [ -e "${reg_dir_path}" ]; then Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: else Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: mkdir $reg_dir_path Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: cp $ca_file_path $reg_dir_path/ca.crt Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: fi Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: done Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: for d in $(ls /etc/docker/certs.d); do Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: echo $d Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: reg_conf_path="/tmp/serviceca/${dp}" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: rm -rf /etc/docker/certs.d/$d Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: fi Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: done Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: sleep 60 & wait ${!} Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: done Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5htrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod node-ca-2j9w6_openshift-image-registry(5ced4aec-1711-4abf-825a-c546047148b7): ErrImagePull: pull QPS exceeded Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:29.594304 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-image-registry/node-ca-2j9w6" podUID=5ced4aec-1711-4abf-825a-c546047148b7 Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.611993210Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.680559105Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:29.759792519Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0\"" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.999071 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=02502f0c-09a2-4a94-b4f4-92a060050951 path="/var/lib/kubelet/pods/02502f0c-09a2-4a94-b4f4-92a060050951/volumes" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.999323 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=14819588-d3b0-492e-8c78-4bbee02f2eca path="/var/lib/kubelet/pods/14819588-d3b0-492e-8c78-4bbee02f2eca/volumes" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.999522 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes 
dir" podUID=c8f5ce0b-5be2-49aa-ae7a-ddd7de103471 path="/var/lib/kubelet/pods/c8f5ce0b-5be2-49aa-ae7a-ddd7de103471/volumes" Jan 23 16:15:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:29.999623 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.000495 8631 generic.go:296] "Generic (PLEG): container finished" podID=841c556dbc6afe45e33a42a9dd8b5492 containerID="6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a" exitCode=0 Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.000592 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" event=&{ID:841c556dbc6afe45e33a42a9dd8b5492 Type:ContainerDied Data:6750eca8cdc264586ad3ed8e9e8f5c30c3a8a5c0d92d7f12bd910799741baa7a} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.000627 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" event=&{ID:841c556dbc6afe45e33a42a9dd8b5492 Type:ContainerStarted Data:46776229e966aaf0cd0c958b2e048b32ae5c8adb2af3d0d1833ad7bc56fef6c5} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.000928 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2j9w6" event=&{ID:5ced4aec-1711-4abf-825a-c546047148b7 Type:ContainerStarted Data:09800a84b987461f550d247ef6464a0986fe8ff5e6d4c93c478c84298037f1c2} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.000929006Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d05f6f7f9426edfc97bfe275521d1e885883a3ba274f390b013689403727edb" id=21741edc-3d6b-4a1b-9f40-754b0879e606 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.001064688Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d05f6f7f9426edfc97bfe275521d1e885883a3ba274f390b013689403727edb not found" id=21741edc-3d6b-4a1b-9f40-754b0879e606 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.001401438Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d05f6f7f9426edfc97bfe275521d1e885883a3ba274f390b013689403727edb" id=97ed1404-78cf-4e14-b4ce-a50b2fece779 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.001481608Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a" id=e3eaa688-e594-4e33-9d80-8e23410e8a77 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.001590379Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a not found" id=e3eaa688-e594-4e33-9d80-8e23410e8a77 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:30.001902 8631 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"node-ca\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a\\\"\"" pod="openshift-image-registry/node-ca-2j9w6" podUID=5ced4aec-1711-4abf-825a-c546047148b7 Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.001910 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jkffc" event=&{ID:612bc2d6-261c-4dc3-9902-489a4589ec9b Type:ContainerStarted Data:6ad25ee8d88b9ea4bf65ebcb8e94ffc345f93b4faabdf223385a04740aa28e19} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.002333895Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4" id=22fb9301-95ea-4920-9556-d6ec4e2c430c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.002458448Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4 not found" id=22fb9301-95ea-4920-9556-d6ec4e2c430c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.002746787Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" id=ac94f007-0b06-43d8-8aa3-e3b078d6e42d name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.002850939Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3 not found" id=ac94f007-0b06-43d8-8aa3-e3b078d6e42d name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:30.002962 8631 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4\\\"\", failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3\\\"\"]" pod="openshift-machine-config-operator/machine-config-daemon-jkffc" podUID=612bc2d6-261c-4dc3-9902-489a4589ec9b Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.003106 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.003600595Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d05f6f7f9426edfc97bfe275521d1e885883a3ba274f390b013689403727edb\"" Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.004030 8631 generic.go:296] "Generic (PLEG): container finished" podID=5eb8d73fcd73cda1a9e34d91bb51e339 
containerID="e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838" exitCode=0 Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.004057 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" event=&{ID:5eb8d73fcd73cda1a9e34d91bb51e339 Type:ContainerDied Data:e7279ada444c8d6e3367f09cd3904c727d58e70dd27502f886c0d0fb42dd3838} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.004072 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" event=&{ID:5eb8d73fcd73cda1a9e34d91bb51e339 Type:ContainerStarted Data:8456cad41ba97a04aeda7d023140cd9e70ca71ca5b7791529fbe81e3887613f8} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.004400811Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f" id=3571eac3-bdb6-42d6-8fa9-e0a37d0a5c11 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.004504309Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f not found" id=3571eac3-bdb6-42d6-8fa9-e0a37d0a5c11 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.005025349Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f" id=ba01d32d-ee94-49cf-aeac-cd237e7aec5c name=/runtime.v1.ImageService/PullImage Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.005119 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" event=&{ID:a88a1018-cc7c-4bd1-b3d2-0d960b53459c Type:ContainerStarted Data:f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.005606 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-pbh26" event=&{ID:ff6a907c-8dc5-4524-b928-d97ba7b430c3 Type:ContainerStarted Data:c4376b9e0340b1b255c30c0cd7e1eca321fd1edc94cf24b4db89a98ab24c43f9} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.005964292Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" id=851eee06-db8c-4716-8208-621d4762ac00 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.006069434Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac not found" id=851eee06-db8c-4716-8208-621d4762ac00 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:30.006240 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-textfile\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac\\\"\"" pod="openshift-monitoring/node-exporter-pbh26" 
podUID=ff6a907c-8dc5-4524-b928-d97ba7b430c3 Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.006640 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vpsv9" event=&{ID:d7b22547-215c-4758-8154-a3bfc577ec12 Type:ContainerStarted Data:47feb76995838b353c3f736eacec3a5a4a678f77ea73390106cb5e1d6193debd} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.006690495Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f\"" Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.007651 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" event=&{ID:94cb9be9-32f4-413c-9fdf-a6e9307ff410 Type:ContainerStarted Data:fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.008353 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/ironic-proxy-nhh2z" event=&{ID:dd7e23a1-2620-491c-a453-b41708d2e0d7 Type:ContainerStarted Data:3356e4ad1b668d8247a1c1445668566ae4738fa934546baaa9bad867ca9a7563} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.009174 8631 generic.go:296] "Generic (PLEG): container finished" podID=04f654eda4f14a4bee64377a5c765343 containerID="ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07" exitCode=0 Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.009228 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" event=&{ID:04f654eda4f14a4bee64377a5c765343 Type:ContainerDied Data:ef931fb756d8f25b468c3632625e21c97e118a0f4be70d6415465a0f7b7d6b07} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.009251 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" event=&{ID:04f654eda4f14a4bee64377a5c765343 Type:ContainerStarted Data:1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.009573274Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2" id=85920175-fcb5-490f-ad92-01772381fab3 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.009678242Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2 not found" id=85920175-fcb5-490f-ad92-01772381fab3 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:30.009805 8631 kuberuntime_manager.go:862] container &Container{Name:haproxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2,Command:[/bin/bash -c #/bin/bash Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: verify_old_haproxy_ps_being_deleted() Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: { Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: local prev_pids Jan 23 
16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: prev_pids="$1" Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: sleep $OLD_HAPROXY_PS_FORCE_DEL_TIMEOUT Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: cur_pids=$(pidof haproxy) Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: for val in $prev_pids; do Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: if [[ $cur_pids =~ (^|[[:space:]])"$val"($|[[:space:]]) ]] ; then Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: kill $val Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: fi Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: done Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: } Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: reload_haproxy() Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: { Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: old_pids=$(pidof haproxy) Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: if [ -n "$old_pids" ]; then Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: /usr/sbin/haproxy -W -db -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/run/haproxy.pid -x /var/lib/haproxy/run/haproxy.sock -sf $old_pids & Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: #There seems to be some cases where HAProxy doesn't drain properly. Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: #To handle that case, SIGTERM signal being sent to old HAProxy processes which haven't terminated. Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: verify_old_haproxy_ps_being_deleted "$old_pids" & Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: else Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: /usr/sbin/haproxy -W -db -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/run/haproxy.pid & Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: fi Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: } Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: msg_handler() Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: { Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: while read -r line; do Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: echo "The client send: $line" >&2 Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: # currently only 'reload' msg is supported Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: if [ "$line" = reload ]; then Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: reload_haproxy Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: fi Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: done Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: } Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: set -ex Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: declare -r haproxy_sock="/var/run/haproxy/haproxy-master.sock" Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: declare -r haproxy_log_sock="/var/run/haproxy/haproxy-log.sock" Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: export -f msg_handler Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: export -f reload_haproxy Jan 23 16:15:30 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: export -f verify_old_haproxy_ps_being_deleted Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: rm -f "$haproxy_sock" "$haproxy_log_sock" Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: socat UNIX-RECV:${haproxy_log_sock} STDOUT & Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: if [ -s "/etc/haproxy/haproxy.cfg" ]; then Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: /usr/sbin/haproxy -W -db -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/run/haproxy.pid & Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: fi Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: socat UNIX-LISTEN:${haproxy_sock},fork system:'bash -c msg_handler' Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OLD_HAPROXY_PS_FORCE_DEL_TIMEOUT,Value:120,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{209715200 0} {} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:conf-dir,ReadOnly:false,MountPath:/etc/haproxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run-dir,ReadOnly:false,MountPath:/var/run/haproxy,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/haproxy_ready,Port:{0 9444 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:50,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod haproxy-hub-master-0.workload.bos2.lab_openshift-kni-infra(04f654eda4f14a4bee64377a5c765343): ErrImagePull: pull QPS exceeded Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.009951 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" event=&{ID:77321459d336b7d15305c9b9a83e4081 Type:ContainerStarted Data:48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.009901676Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=a2514b56-041f-429d-ac68-7c7790dacc3e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.009990945Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a2514b56-041f-429d-ac68-7c7790dacc3e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.010311792Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=dfac4a9b-a645-4572-b499-6260f9c91504 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.010321 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" event=&{ID:9552ff413d8390655360ce968177c622 Type:ContainerStarted Data:1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3} Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.010428128Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dfac4a9b-a645-4572-b499-6260f9c91504 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.010792252Z" level=info msg="Creating container: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor" id=1b71daf9-ca17-4115-9baf-ba304467e95c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.010851042Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.010950 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9bshd" event=&{ID:839425af-4ad1-4627-b58f-20197745cb4a Type:ContainerStarted Data:8dd056b754da8a49246e3d7fc9fae2fc653e702f35aec269e12c16cef53eadc1} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.011347 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-4pckj" event=&{ID:16d2550a-6aa8-453b-9d72-f50466ef11b2 Type:ContainerStarted Data:e9043f79ed9fe07655c793ca367554a40eef242c8126e361ece6172594d6895f} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.011681 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" event=&{ID:38eebeadc7ddc4d42d1de9a5e4ac69f1 Type:ContainerStarted Data:90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.012392 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" event=&{ID:b8e918bfaafef0fc7d13026942c43171 Type:ContainerStarted Data:f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462} Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.061913 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-route-controller-manager/route-controller-manager-5fdd49db4f-ftmvb] Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.063271 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-route-controller-manager/route-controller-manager-5fdd49db4f-ftmvb] Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.069969 8631 kubelet.go:2135] "SyncLoop 
DELETE" source="api" pods=[openshift-controller-manager/controller-manager-876b6ffdf-hrzw7] Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.071755 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-controller-manager/controller-manager-876b6ffdf-hrzw7] Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.139578 8631 status_manager.go:652] "Status for pod is up-to-date; skipping" podUID=673a603f-a83d-437b-bf5e-7a95a63a17fa Jan 23 16:15:30 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope. -- Subject: Unit crio-conmon-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:30.140783 8631 status_manager.go:652] "Status for pod is up-to-date; skipping" podUID=4b289996-b213-413c-a468-f51e7e3eb0e4 Jan 23 16:15:30 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02. -- Subject: Unit crio-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.226602478Z" level=info msg="Created container 51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor" id=1b71daf9-ca17-4115-9baf-ba304467e95c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.227003956Z" level=info msg="Starting container: 51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02" id=81f1f0c2-cd99-472b-a9d3-4692c43fa3db name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.244868715Z" level=info msg="Started container" PID=9180 containerID=51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02 description=openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor id=81f1f0c2-cd99-472b-a9d3-4692c43fa3db name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7 Jan 23 16:15:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:30.264169 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"haproxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" podUID=04f654eda4f14a4bee64377a5c765343 Jan 23 16:15:30 hub-master-0.workload.bos2.lab conmon[9168]: conmon 51645b444a8f7e79c603 : container 9180 exited with status 1 Jan 23 16:15:30 hub-master-0.workload.bos2.lab systemd[1]: crio-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope has successfully entered the 'dead' state. Jan 23 16:15:30 hub-master-0.workload.bos2.lab systemd[1]: crio-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope: Consumed 51ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope completed and consumed the indicated resources. Jan 23 16:15:30 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope has successfully entered the 'dead' state. Jan 23 16:15:30 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope: Consumed 38ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02.scope completed and consumed the indicated resources. Jan 23 16:15:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:30.532371901Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d05f6f7f9426edfc97bfe275521d1e885883a3ba274f390b013689403727edb\"" Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.005463707Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f\"" Jan 23 16:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:31.015193 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_haproxy-hub-master-0.workload.bos2.lab_04f654eda4f14a4bee64377a5c765343/haproxy-monitor/3.log" Jan 23 16:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:31.015435 8631 generic.go:296] "Generic (PLEG): container finished" podID=04f654eda4f14a4bee64377a5c765343 containerID="51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02" exitCode=1 Jan 23 16:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:31.015456 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" event=&{ID:04f654eda4f14a4bee64377a5c765343 Type:ContainerDied Data:51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02} Jan 23 16:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:31.015717 8631 scope.go:115] "RemoveContainer" containerID="51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02" Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.016048247Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2" id=3fb5865d-c2b5-4977-ba1e-7a5f66587941 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.016158402Z" level=info msg="Image 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2 not found" id=3fb5865d-c2b5-4977-ba1e-7a5f66587941 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.016596685Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=7b039599-fedf-4ced-b665-d897cca00642 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.016703076Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7b039599-fedf-4ced-b665-d897cca00642 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.017273960Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=60c22030-4471-4fc5-b354-8facd7c696b6 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.017411133Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=60c22030-4471-4fc5-b354-8facd7c696b6 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.017879117Z" level=info msg="Creating container: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor" id=76bcda6b-c8d0-4bfc-a986-839d53f8d01c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.017951327Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:31 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope. -- Subject: Unit crio-conmon-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:31 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181. -- Subject: Unit crio-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.178768758Z" level=info msg="Created container b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor" id=76bcda6b-c8d0-4bfc-a986-839d53f8d01c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.179094889Z" level=info msg="Starting container: b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181" id=9c7e74f2-f9e2-4a0e-b88e-9b1f5c5412e6 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:31.196498420Z" level=info msg="Started container" PID=9252 containerID=b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181 description=openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor id=9c7e74f2-f9e2-4a0e-b88e-9b1f5c5412e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7 Jan 23 16:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:31.215075 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"haproxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2\\\"\"" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" podUID=04f654eda4f14a4bee64377a5c765343 Jan 23 16:15:31 hub-master-0.workload.bos2.lab conmon[9240]: conmon b94ddf4c5c92785eae78 : container 9252 exited with status 1 Jan 23 16:15:31 hub-master-0.workload.bos2.lab systemd[1]: crio-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope has successfully entered the 'dead' state. Jan 23 16:15:31 hub-master-0.workload.bos2.lab systemd[1]: crio-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope: Consumed 48ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope completed and consumed the indicated resources. Jan 23 16:15:31 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope has successfully entered the 'dead' state. Jan 23 16:15:31 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope: Consumed 39ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181.scope completed and consumed the indicated resources. 
Jan 23 16:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:31.997724 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4b289996-b213-413c-a468-f51e7e3eb0e4 path="/var/lib/kubelet/pods/4b289996-b213-413c-a468-f51e7e3eb0e4/volumes" Jan 23 16:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:31.997928 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=673a603f-a83d-437b-bf5e-7a95a63a17fa path="/var/lib/kubelet/pods/673a603f-a83d-437b-bf5e-7a95a63a17fa/volumes" Jan 23 16:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:32.018384 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_haproxy-hub-master-0.workload.bos2.lab_04f654eda4f14a4bee64377a5c765343/haproxy-monitor/4.log" Jan 23 16:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:32.018765 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_haproxy-hub-master-0.workload.bos2.lab_04f654eda4f14a4bee64377a5c765343/haproxy-monitor/3.log" Jan 23 16:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:32.019137 8631 generic.go:296] "Generic (PLEG): container finished" podID=04f654eda4f14a4bee64377a5c765343 containerID="b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181" exitCode=1 Jan 23 16:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:32.019153 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" event=&{ID:04f654eda4f14a4bee64377a5c765343 Type:ContainerDied Data:b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181} Jan 23 16:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:32.019182 8631 scope.go:115] "RemoveContainer" containerID="51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02" Jan 23 16:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:32.019376 8631 scope.go:115] "RemoveContainer" containerID="b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181" Jan 23 16:15:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:32.019603022Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2" id=3c960d77-53d8-499a-ab39-7360d34ae208 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:32.019738868Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2 not found" id=3c960d77-53d8-499a-ab39-7360d34ae208 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:32.019923 8631 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"haproxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2\\\"\", failed to \"StartContainer\" for \"haproxy-monitor\" with CrashLoopBackOff: \"back-off 10s restarting failed container=haproxy-monitor pod=haproxy-hub-master-0.workload.bos2.lab_openshift-kni-infra(04f654eda4f14a4bee64377a5c765343)\"]" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" podUID=04f654eda4f14a4bee64377a5c765343 Jan 23 16:15:32 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:32.020003357Z" level=info msg="Removing container: 51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02" id=2442c50c-818d-45a7-aee2-72f4103ad042 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:15:32 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-0bd75505caf5cc00cd59a10fa1570972237dcadd9271c70d1a218c7651648dfb-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-0bd75505caf5cc00cd59a10fa1570972237dcadd9271c70d1a218c7651648dfb-merged.mount has successfully entered the 'dead' state. Jan 23 16:15:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:32.053723445Z" level=info msg="Removed container 51645b444a8f7e79c603689733867897b1ea208210c73b79778558d8f2825f02: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor" id=2442c50c-818d-45a7-aee2-72f4103ad042 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:15:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:33.022641 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_haproxy-hub-master-0.workload.bos2.lab_04f654eda4f14a4bee64377a5c765343/haproxy-monitor/4.log" Jan 23 16:15:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:33.023313 8631 scope.go:115] "RemoveContainer" containerID="b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181" Jan 23 16:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:33.023506238Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2" id=2b80cf4d-32a2-4ea5-82b5-14198e2e26e1 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:33.023646489Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2 not found" id=2b80cf4d-32a2-4ea5-82b5-14198e2e26e1 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:15:33.024044 8631 pod_workers.go:965] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"haproxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2\\\"\", failed to \"StartContainer\" for \"haproxy-monitor\" with CrashLoopBackOff: \"back-off 10s restarting failed container=haproxy-monitor pod=haproxy-hub-master-0.workload.bos2.lab_openshift-kni-infra(04f654eda4f14a4bee64377a5c765343)\"]" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" podUID=04f654eda4f14a4bee64377a5c765343 Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.380795476Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=017ef411-b5fa-46e0-938e-6f133b5d5064 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.381419093Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d" 
id=30bcf2a3-3e8d-4e3b-984e-43dcd02c0325 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.382014444Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=79546dff-c749-4419-a8ec-d657d4438492 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.382263715Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d" id=252317a7-4178-4635-adb1-f1e3bdf4114f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.383309310Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=79546dff-c749-4419-a8ec-d657d4438492 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.383550985Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd658985e73774e12344aab46b7bcce9a5f0c812276bcbc5c455f105ac9eedaf,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d],Size_:438699973,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=252317a7-4178-4635-adb1-f1e3bdf4114f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.383851071Z" level=info msg="Creating container: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/setup" id=129deca1-0235-48da-9b99-381b19d0cfbc name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.383927531Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.384028405Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-7ks6h/egress-router-binary-copy" id=76a31b32-cc50-4699-8e45-9c3767bd5277 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.384102401Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope. -- Subject: Unit crio-conmon-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope. 
-- Subject: Unit crio-conmon-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f. -- Subject: Unit crio-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa. -- Subject: Unit crio-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.546927282Z" level=info msg="Created container ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/setup" id=129deca1-0235-48da-9b99-381b19d0cfbc name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.547395764Z" level=info msg="Starting container: ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f" id=e8987b8b-5cac-41f3-b600-5b41a0dd1de5 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.566562239Z" level=info msg="Started container" PID=9364 containerID=ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f description=openshift-etcd/etcd-hub-master-0.workload.bos2.lab/setup id=e8987b8b-5cac-41f3-b600-5b41a0dd1de5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.585150653Z" level=info msg="Created container 0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa: openshift-multus/multus-additional-cni-plugins-7ks6h/egress-router-binary-copy" id=76a31b32-cc50-4699-8e45-9c3767bd5277 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.585502970Z" level=info msg="Starting container: 0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa" id=7d5ea83c-3b0f-4f82-a2c6-363cf74a3b8f name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: crio-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope has successfully entered the 'dead' state. 
Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: crio-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope completed and consumed the indicated resources. Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope has successfully entered the 'dead' state. Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f.scope completed and consumed the indicated resources. Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.604325661Z" level=info msg="Started container" PID=9365 containerID=0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa description=openshift-multus/multus-additional-cni-plugins-7ks6h/egress-router-binary-copy id=7d5ea83c-3b0f-4f82-a2c6-363cf74a3b8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.608039473Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_986b54fe-de66-40dc-ac93-70ee8b6a6c07\"" Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.619265877Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.619288231Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.623858010Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/egress-router\"" Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.634348308Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.634368285Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:34.634382238Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_986b54fe-de66-40dc-ac93-70ee8b6a6c07\"" Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope has successfully entered the 'dead' state. 
Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope: Consumed 36ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope completed and consumed the indicated resources. Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: crio-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope has successfully entered the 'dead' state. Jan 23 16:15:34 hub-master-0.workload.bos2.lab systemd[1]: crio-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope: Consumed 43ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa.scope completed and consumed the indicated resources. Jan 23 16:15:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:35.027222 8631 generic.go:296] "Generic (PLEG): container finished" podID=94cb9be9-32f4-413c-9fdf-a6e9307ff410 containerID="0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa" exitCode=0 Jan 23 16:15:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:35.027270 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" event=&{ID:94cb9be9-32f4-413c-9fdf-a6e9307ff410 Type:ContainerDied Data:0f32a7e3348e62d830a883f743822578b18a971cde8994b712dcb31cb276f2fa} Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.027928846Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d" id=af938aaa-c2e8-43d2-8006-ed51b491e1b1 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.028074180Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d not found" id=af938aaa-c2e8-43d2-8006-ed51b491e1b1 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.028550937Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d" id=8430265b-6748-470a-b5f3-8517b71f5e97 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:35.028563 8631 generic.go:296] "Generic (PLEG): container finished" podID=38eebeadc7ddc4d42d1de9a5e4ac69f1 containerID="ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f" exitCode=0 Jan 23 16:15:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:35.028580 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" event=&{ID:38eebeadc7ddc4d42d1de9a5e4ac69f1 Type:ContainerDied Data:ca652c3a96947a38a3f925c9978793c83fc104283bd5d6623c323c1f287b137f} Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.028989876Z" 
level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=07a0fba7-d16a-4568-9d09-bf9814cc5f3e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.030081722Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=07a0fba7-d16a-4568-9d09-bf9814cc5f3e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.030468579Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d\"" Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.030724165Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=7ea2ac43-5f3a-4507-bd88-d9f77acbf43b name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.031566540Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7ea2ac43-5f3a-4507-bd88-d9f77acbf43b name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.032563819Z" level=info msg="Creating container: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-ensure-env-vars" id=7320bb7a-4946-4d22-ab26-9b1e7a49f494 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.032639577Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:35 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope. -- Subject: Unit crio-conmon-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:35 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3. -- Subject: Unit crio-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.171758705Z" level=info msg="Created container 7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-ensure-env-vars" id=7320bb7a-4946-4d22-ab26-9b1e7a49f494 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.172242605Z" level=info msg="Starting container: 7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3" id=bc5308f0-a09b-44bd-a8e4-831a9b1887c7 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.193223303Z" level=info msg="Started container" PID=9496 containerID=7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3 description=openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-ensure-env-vars id=bc5308f0-a09b-44bd-a8e4-831a9b1887c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea Jan 23 16:15:35 hub-master-0.workload.bos2.lab systemd[1]: crio-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope has successfully entered the 'dead' state. Jan 23 16:15:35 hub-master-0.workload.bos2.lab systemd[1]: crio-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope: Consumed 22ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope completed and consumed the indicated resources. Jan 23 16:15:35 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope has successfully entered the 'dead' state. Jan 23 16:15:35 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3.scope completed and consumed the indicated resources. 
Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.255249576Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:15:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:35.296230170Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d\"" Jan 23 16:15:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:36.031740 8631 generic.go:296] "Generic (PLEG): container finished" podID=38eebeadc7ddc4d42d1de9a5e4ac69f1 containerID="7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3" exitCode=0 Jan 23 16:15:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:36.031920 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" event=&{ID:38eebeadc7ddc4d42d1de9a5e4ac69f1 Type:ContainerDied Data:7cd77e7c9f8ffc958ef1aaf39ee26365f603de1ced63722f713f32f264e3a0c3} Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.032293091Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=a6ccc78b-588d-4b68-b9c4-3b86ed3cd15f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.033359008Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a6ccc78b-588d-4b68-b9c4-3b86ed3cd15f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.034143361Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=0ee700dc-703b-4a90-81c9-cc4ecadc3aa1 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.035186045Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0ee700dc-703b-4a90-81c9-cc4ecadc3aa1 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.035718822Z" level=info msg="Creating container: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-resources-copy" id=b43b4ba2-d64d-4015-82db-e41214b7dc86 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.035796248Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:36 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope. 
-- Subject: Unit crio-conmon-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:36 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac. -- Subject: Unit crio-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:36 hub-master-0.workload.bos2.lab systemd[1]: NetworkManager-dispatcher.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jan 23 16:15:36 hub-master-0.workload.bos2.lab systemd[1]: NetworkManager-dispatcher.service: Consumed 4.800s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit NetworkManager-dispatcher.service completed and consumed the indicated resources. Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.196767874Z" level=info msg="Created container 0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-resources-copy" id=b43b4ba2-d64d-4015-82db-e41214b7dc86 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.197201021Z" level=info msg="Starting container: 0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac" id=0ec232ce-1035-4051-aaf7-2ed19f7e63ab name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:36.216509520Z" level=info msg="Started container" PID=9551 containerID=0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac description=openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-resources-copy id=0ec232ce-1035-4051-aaf7-2ed19f7e63ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea Jan 23 16:15:36 hub-master-0.workload.bos2.lab systemd[1]: crio-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope has successfully entered the 'dead' state. Jan 23 16:15:36 hub-master-0.workload.bos2.lab systemd[1]: crio-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope completed and consumed the indicated resources. Jan 23 16:15:36 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope has successfully entered the 'dead' state. Jan 23 16:15:36 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac.scope completed and consumed the indicated resources. Jan 23 16:15:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:37.035685 8631 generic.go:296] "Generic (PLEG): container finished" podID=38eebeadc7ddc4d42d1de9a5e4ac69f1 containerID="0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac" exitCode=0 Jan 23 16:15:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:37.035713 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" event=&{ID:38eebeadc7ddc4d42d1de9a5e4ac69f1 Type:ContainerDied Data:0408f728d15e66aa83d805b0627248f143ca5436860c10c1096adc942ab22aac} Jan 23 16:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:37.036169197Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=439332f8-dcf9-4e86-9ced-fd5f997a03ae name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:38.046027 8631 kubelet_node_status.go:590] "Recording event message for node" node="hub-master-0.workload.bos2.lab" event="NodeReady" Jan 23 16:15:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:39.128082238Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=439332f8-dcf9-4e86-9ced-fd5f997a03ae name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:39.128956769Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=ee778246-01a4-4d7f-a021-4189809cd279 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:40.460057649Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f" id=ba01d32d-ee94-49cf-aeac-cd237e7aec5c name=/runtime.v1.ImageService/PullImage Jan 23 16:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:40.460883717Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f" id=516c63f8-dc3d-427b-b207-816777a83a7e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:40.730306480Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ee778246-01a4-4d7f-a021-4189809cd279 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:40.731366796Z" level=info msg="Creating container: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcdctl" id=11518e5d-f15f-4654-9caf-a3fec6c23908 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:40.731447092Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.083163 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh] Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.083211 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:42 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod1886664c_cb49_48f7_b263_eff19ad90869.slice. -- Subject: Unit kubepods-burstable-pod1886664c_cb49_48f7_b263_eff19ad90869.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod1886664c_cb49_48f7_b263_eff19ad90869.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.105946 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh] Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.164158 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1886664c-cb49-48f7-b263-eff19ad90869-config\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.164199 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1886664c-cb49-48f7-b263-eff19ad90869-serving-cert\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.164232 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1886664c-cb49-48f7-b263-eff19ad90869-client-ca\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.164304 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-94767\" (UniqueName: \"kubernetes.io/projected/1886664c-cb49-48f7-b263-eff19ad90869-kube-api-access-94767\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.265537 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1886664c-cb49-48f7-b263-eff19ad90869-config\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.265580 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1886664c-cb49-48f7-b263-eff19ad90869-serving-cert\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.265607 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1886664c-cb49-48f7-b263-eff19ad90869-client-ca\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.265636 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-94767\" (UniqueName: \"kubernetes.io/projected/1886664c-cb49-48f7-b263-eff19ad90869-kube-api-access-94767\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.266104 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1886664c-cb49-48f7-b263-eff19ad90869-client-ca\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.266310 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1886664c-cb49-48f7-b263-eff19ad90869-config\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.268008 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1886664c-cb49-48f7-b263-eff19ad90869-serving-cert\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 
16:15:42.278933 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-94767\" (UniqueName: \"kubernetes.io/projected/1886664c-cb49-48f7-b263-eff19ad90869-kube-api-access-94767\") pod \"route-controller-manager-5fdd49db4f-5q9jh\" (UID: \"1886664c-cb49-48f7-b263-eff19ad90869\") " pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:42.398246 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:15:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:42.398750266Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:42.398793887Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.084412 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-controller-manager/controller-manager-876b6ffdf-x4gbg] Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.084457 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.085081 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-apiserver/apiserver-746c4bf98c-9x4mg] Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.085115 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.085884 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls] Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.085910 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.086370 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-authentication/oauth-openshift-868d5f6bf8-svlxj] Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.086406 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.095999 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-controller-manager/controller-manager-876b6ffdf-x4gbg] Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.097465 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls] Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.098172 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-authentication/oauth-openshift-868d5f6bf8-svlxj] Jan 23 16:15:43 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-podf6df27f7_bd15_488a_8ec8_6a52e1a72ddd.slice. 
-- Subject: Unit kubepods-burstable-podf6df27f7_bd15_488a_8ec8_6a52e1a72ddd.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-podf6df27f7_bd15_488a_8ec8_6a52e1a72ddd.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.098684 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-apiserver/apiserver-746c4bf98c-9x4mg] Jan 23 16:15:43 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod43afcd6c_e482_449b_986d_bd52ed16ad2b.slice. -- Subject: Unit kubepods-burstable-pod43afcd6c_e482_449b_986d_bd52ed16ad2b.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod43afcd6c_e482_449b_986d_bd52ed16ad2b.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:43 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-podb68fa2a4_e557_4154_b0c2_64f449cfd597.slice. -- Subject: Unit kubepods-burstable-podb68fa2a4_e557_4154_b0c2_64f449cfd597.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-podb68fa2a4_e557_4154_b0c2_64f449cfd597.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:15:43 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-burstable-pod69794e08_d62b_401c_8dea_a730bf37256a.slice. -- Subject: Unit kubepods-burstable-pod69794e08_d62b_401c_8dea_a730bf37256a.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-burstable-pod69794e08_d62b_401c_8dea_a730bf37256a.slice has finished starting up. -- -- The start-up result is done. 
Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170383 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-audit-policies\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170411 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b68fa2a4-e557-4154-b0c2-64f449cfd597-audit-dir\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170431 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170451 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-error\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170469 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf8nb\" (UniqueName: \"kubernetes.io/projected/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-kube-api-access-nf8nb\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170488 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170504 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-serving-cert\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170521 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq56v\" (UniqueName: 
\"kubernetes.io/projected/b68fa2a4-e557-4154-b0c2-64f449cfd597-kube-api-access-mq56v\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170551 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-audit-policies\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170589 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170619 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170639 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q69qv\" (UniqueName: \"kubernetes.io/projected/43afcd6c-e482-449b-986d-bd52ed16ad2b-kube-api-access-q69qv\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170657 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-proxy-ca-bundles\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170682 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-encryption-config\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170799 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-config\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170840 8631 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-trusted-ca-bundle\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170917 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-service-ca\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170949 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-client-ca\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.170971 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-trusted-ca-bundle\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171017 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43afcd6c-e482-449b-986d-bd52ed16ad2b-node-pullsecrets\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171044 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-serving-cert\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171071 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7kgs\" (UniqueName: \"kubernetes.io/projected/69794e08-d62b-401c-8dea-a730bf37256a-kube-api-access-s7kgs\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171109 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-etcd-serving-ca\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 
16:15:43.171177 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-session\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171258 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-router-certs\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171285 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-serving-cert\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171314 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-etcd-serving-ca\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171341 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-image-import-ca\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171362 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-login\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171396 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-audit\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171432 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-etcd-client\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:15:43.171464 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43afcd6c-e482-449b-986d-bd52ed16ad2b-audit-dir\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171519 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69794e08-d62b-401c-8dea-a730bf37256a-audit-dir\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171549 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171567 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-etcd-client\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171603 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-config\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.171628 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-encryption-config\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272798 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-proxy-ca-bundles\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272829 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-encryption-config\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 
16:15:43.272848 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-config\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272876 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-trusted-ca-bundle\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272894 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-service-ca\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272910 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-client-ca\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272927 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-serving-cert\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272943 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-trusted-ca-bundle\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272959 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43afcd6c-e482-449b-986d-bd52ed16ad2b-node-pullsecrets\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272977 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-s7kgs\" (UniqueName: \"kubernetes.io/projected/69794e08-d62b-401c-8dea-a730bf37256a-kube-api-access-s7kgs\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.272992 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-etcd-serving-ca\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273009 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-session\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273028 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-router-certs\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273044 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-serving-cert\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273061 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-etcd-serving-ca\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273078 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-image-import-ca\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273095 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-etcd-client\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273112 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-login\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273130 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-audit\") pod 
\"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273145 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43afcd6c-e482-449b-986d-bd52ed16ad2b-audit-dir\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273162 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69794e08-d62b-401c-8dea-a730bf37256a-audit-dir\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273169 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43afcd6c-e482-449b-986d-bd52ed16ad2b-node-pullsecrets\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273183 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273201 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-etcd-client\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273235 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-config\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273254 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-encryption-config\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273273 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-audit-policies\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:15:43.273290 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b68fa2a4-e557-4154-b0c2-64f449cfd597-audit-dir\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273310 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273331 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-error\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273349 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-nf8nb\" (UniqueName: \"kubernetes.io/projected/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-kube-api-access-nf8nb\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273371 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273387 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-serving-cert\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273405 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-mq56v\" (UniqueName: \"kubernetes.io/projected/b68fa2a4-e557-4154-b0c2-64f449cfd597-kube-api-access-mq56v\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273423 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " 
pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273440 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273455 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-q69qv\" (UniqueName: \"kubernetes.io/projected/43afcd6c-e482-449b-986d-bd52ed16ad2b-kube-api-access-q69qv\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273471 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-audit-policies\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273568 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-proxy-ca-bundles\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273630 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b68fa2a4-e557-4154-b0c2-64f449cfd597-audit-dir\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273630 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-client-ca\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273800 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-config\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273926 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-audit-policies\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.273984 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-service-ca\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274026 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-etcd-serving-ca\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274116 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-trusted-ca-bundle\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274255 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-etcd-serving-ca\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274262 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69794e08-d62b-401c-8dea-a730bf37256a-audit-dir\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274307 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43afcd6c-e482-449b-986d-bd52ed16ad2b-audit-dir\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274333 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-image-import-ca\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274335 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-audit\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274475 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-trusted-ca-bundle\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274635 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274645 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43afcd6c-e482-449b-986d-bd52ed16ad2b-config\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274692 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274771 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b68fa2a4-e557-4154-b0c2-64f449cfd597-audit-policies\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.274862 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-encryption-config\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.275111 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-etcd-client\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.275628 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-session\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.276081 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.276188 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-error\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.276690 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.276809 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-router-certs\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.276882 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-user-template-login\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.277350 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-etcd-client\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.277636 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68fa2a4-e557-4154-b0c2-64f449cfd597-serving-cert\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.278032 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-serving-cert\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.278198 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-serving-cert\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.278256 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43afcd6c-e482-449b-986d-bd52ed16ad2b-encryption-config\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.279005 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69794e08-d62b-401c-8dea-a730bf37256a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.288098 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq56v\" (UniqueName: \"kubernetes.io/projected/b68fa2a4-e557-4154-b0c2-64f449cfd597-kube-api-access-mq56v\") pod \"apiserver-86c7cf6467-bbxls\" (UID: \"b68fa2a4-e557-4154-b0c2-64f449cfd597\") " pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.288113 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-q69qv\" (UniqueName: \"kubernetes.io/projected/43afcd6c-e482-449b-986d-bd52ed16ad2b-kube-api-access-q69qv\") pod \"apiserver-746c4bf98c-9x4mg\" (UID: \"43afcd6c-e482-449b-986d-bd52ed16ad2b\") " pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.288732 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7kgs\" (UniqueName: \"kubernetes.io/projected/69794e08-d62b-401c-8dea-a730bf37256a-kube-api-access-s7kgs\") pod \"oauth-openshift-868d5f6bf8-svlxj\" (UID: \"69794e08-d62b-401c-8dea-a730bf37256a\") " pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.289030 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf8nb\" (UniqueName: \"kubernetes.io/projected/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd-kube-api-access-nf8nb\") pod \"controller-manager-876b6ffdf-x4gbg\" (UID: \"f6df27f7-bd15-488a-8ec8-6a52e1a72ddd\") " pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.403754 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.404100684Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.404142294Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.410631 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.411046108Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.411085444Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.418283 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.418602748Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.418634405Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:43.423851 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.424077875Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.424105172Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.996833237Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a" id=0e772705-1cdc-4137-a792-40bdb0d695d8 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.997005489Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a not found" id=0e772705-1cdc-4137-a792-40bdb0d695d8 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:43.997414495Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a" id=a8d01033-2fdb-4840-b715-47d13f08e330 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:44.042323371Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a\"" Jan 23 16:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:44.296172410Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a\"" Jan 23 16:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:44.996452190Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4" id=7ccaa3d5-8c03-4201-be4f-ddc3d3b18fb1 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:44.996526701Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" id=c51e3ed6-1000-4e82-bebf-d3430ea23fec name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:44.996782326Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac not found" id=c51e3ed6-1000-4e82-bebf-d3430ea23fec name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:44.997212417Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" id=dba95e05-ccb2-4f93-b756-d7387291102b name=/runtime.v1.ImageService/PullImage Jan 23 16:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:44.999398088Z" level=info msg="Trying to access 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac\"" Jan 23 16:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:45.288979389Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac\"" Jan 23 16:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:45.460176629Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fc458ece66c8d4184b45b5c495a372a96b47432ae5a39844cd5837e3981685b" id=8869046f-1591-413c-8215-e8401d4317fa name=/runtime.v1.ImageService/PullImage Jan 23 16:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:45.460841937Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fc458ece66c8d4184b45b5c495a372a96b47432ae5a39844cd5837e3981685b" id=7a07963f-8cf0-44db-a6f7-0832f3719494 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:45.464230408Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d05f6f7f9426edfc97bfe275521d1e885883a3ba274f390b013689403727edb" id=97ed1404-78cf-4e14-b4ce-a50b2fece779 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:45.464451837Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a088397e1327afe55e515dacbd6c8baea0d644e5fadb9690a6c51c97e98387e,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f],Size_:417863925,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=516c63f8-dc3d-427b-b207-816777a83a7e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:45.464934000Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d05f6f7f9426edfc97bfe275521d1e885883a3ba274f390b013689403727edb" id=9494cdf3-e337-40c8-acea-14d4b29105d3 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:45.465062637Z" level=info msg="Creating container: openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/coredns" id=483578e6-8a3f-4ee8-9491-3099d8a3e3fe name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:45.465134188Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:46.605211653Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=cc36ce70-1d78-4c70-80ed-9e7ecd0d20b8 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:46.605752072Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4" id=8157d0bd-d7eb-405a-ba8f-1d3b55bf0da7 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:46.606034565Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=02459f83-bdd3-4bff-9f3c-d6a44fa929e0 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:46.606440565Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4" id=a0c6aa32-bd95-44bd-89b2-dcdfaf81cf3d name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:46.609497347Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1bdd18aabe32ed602f8564625d0cf3602bd96f29a928a99d2211e09b28a6f884,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4],Size_:538002756,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7ccaa3d5-8c03-4201-be4f-ddc3d3b18fb1 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:46.609946454Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4" id=2e533c8b-a25c-4e07-ba9b-0549e93c7a2f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:47.996249 8631 scope.go:115] "RemoveContainer" containerID="b94ddf4c5c92785eae78168c4d7866c1340eaf7a2bff07a90e9b6f4d88590181" Jan 23 16:15:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:47.996593477Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2" id=9505be31-d060-4ea4-b90b-28dff20b338b name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:47.996754639Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2 not found" id=9505be31-d060-4ea4-b90b-28dff20b338b name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:47.997180612Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2" id=d402ac8f-2e9d-4a4f-a12f-3d54f415f792 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:47.999450988Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2\"" Jan 23 16:15:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:48.273685198Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2\"" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.041132033Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:b83e5e61967d89fca7935527efce00d0a72edcd745abbcf51393ad64088a5407,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d05f6f7f9426edfc97bfe275521d1e885883a3ba274f390b013689403727edb],Size_:435635285,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9494cdf3-e337-40c8-acea-14d4b29105d3 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.041628799Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:71e4a2c91112e4d43a13a0bd4e826b2f4172f014ef1133b14bf845b16bde15e9,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fc458ece66c8d4184b45b5c495a372a96b47432ae5a39844cd5837e3981685b],Size_:480807152,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7a07963f-8cf0-44db-a6f7-0832f3719494 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.041959713Z" level=info msg="Creating container: openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/keepalived" id=4cf65ea3-7ec6-48cb-9d43-295e258d5df8 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.042037078Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.045477154Z" level=info msg="Creating container: openshift-dns/node-resolver-9bshd/dns-node-resolver" id=7f1d6247-2b9d-4e7e-b8d6-5b9e18b54aa9 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.045547545Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.048532603Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b" id=bdb11cc7-f1e3-4517-85b5-a68536cd15b3 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.049350370Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b" id=d16fc5b9-65e4-4b93-9bac-c02fbb72223c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.050422071Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=02459f83-bdd3-4bff-9f3c-d6a44fa929e0 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.050509030Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:1bdd18aabe32ed602f8564625d0cf3602bd96f29a928a99d2211e09b28a6f884,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4],Size_:538002756,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a0c6aa32-bd95-44bd-89b2-dcdfaf81cf3d name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.050551305Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1bdd18aabe32ed602f8564625d0cf3602bd96f29a928a99d2211e09b28a6f884,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6f9b6fdba34485dfdec1d31ca0a04a85eff54174688dc402692f78f46743ef4],Size_:538002756,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2e533c8b-a25c-4e07-ba9b-0549e93c7a2f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.050958054Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-jkffc/machine-config-daemon" id=f545af21-45ed-4774-9070-cd6026d1dd6c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.051020793Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.051040075Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-server-vpsv9/machine-config-server" id=765bc0b3-c835-46da-8a82-2755befa51aa name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.051091072Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.051583332Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=5bd3fac4-7929-4e85-a952-5fed998fcf8d name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.051641560Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.056951611Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2781dd813ab50b5e0d6db152427c45df7e41ce57bc01b5633392b1937a7bacbd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b],Size_:542250120,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d16fc5b9-65e4-4b93-9bac-c02fbb72223c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.057499998Z" level=info msg="Creating container: openshift-machine-api/ironic-proxy-nhh2z/ironic-proxy" id=5d96c446-4cd3-47f6-b7ec-30792128bc6c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.057556070Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.567090297Z" level=info msg="Pulled image: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=9787ec68-f3a4-4868-9273-d8d0562062a1 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.567197468Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:946567bcda2161bc1f55a6aa236106c947c5d863225f024c8c46f19b91b71679" id=9868f4a4-43b6-4ea0-b656-8779ee6b7916 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.567276158Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=616829b4-85fb-4dbe-ba4c-6e316212f141 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.567300845Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=25832e30-4b35-4298-b1d5-0429f5c3c946 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.567956918Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=6c536df5-941f-47dd-b461-4dbd58437309 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.568290011Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=aa773e77-6ea6-4eb0-961e-2a1faaca12f9 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.568641807Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=bcb1ebe6-28d8-47e9-af45-beee527306e9 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.577642446Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/62a4f480-5302-4186-9243-131e0e30c82c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.577668583Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.581366664Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/50cbaa03-656f-44ae-a9fa-14729f05674c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.581392140Z" level=info msg="Adding 
pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.584746987Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:946567bcda2161bc1f55a6aa236106c947c5d863225f024c8c46f19b91b71679" id=5f5a0d7f-f88c-4e5c-8b9d-dcc10538d98b name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.585116559Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/a1984ae3-ce69-4aa4-a067-829ad707085e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.585139751Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.585626651Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/10f72223-299a-4e15-833e-6ef03c2ba59a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.585648932Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.834109573Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/14072687-8a39-43f4-aeda-907065e5f3a0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.834135722Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:15:50 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5.scope. -- Subject: Unit crio-conmon-ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:50 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd.scope. 
-- Subject: Unit crio-conmon-b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:50 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd. -- Subject: Unit crio-b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:50 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5. -- Subject: Unit crio-ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:50 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.972350170Z" level=info msg="Created container b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd: openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/coredns" id=483578e6-8a3f-4ee8-9491-3099d8a3e3fe name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.972797011Z" level=info msg="Starting container: b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd" id=7b499ef6-e7ff-435f-9ba5-97483a25ec5c name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:50.991107428Z" level=info msg="Started container" PID=9931 containerID=b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd description=openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/coredns id=7b499ef6-e7ff-435f-9ba5-97483a25ec5c name=/runtime.v1.RuntimeService/StartContainer sandboxID=8456cad41ba97a04aeda7d023140cd9e70ca71ca5b7791529fbe81e3887613f8 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.010416533Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=361dbbb6-1a1d-47cf-8796-5c6194d21063 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.010566170Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=361dbbb6-1a1d-47cf-8796-5c6194d21063 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.011083137Z" level=info msg="Checking image 
status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=0120f6f2-9df4-4352-ae7c-b56df077962a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.011173449Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0120f6f2-9df4-4352-ae7c-b56df077962a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.011703983Z" level=info msg="Creating container: openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/coredns-monitor" id=e196d3a1-11a7-44e2-9c91-5649f79b3b49 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.011802041Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.019813172Z" level=info msg="Created container ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcdctl" id=11518e5d-f15f-4654-9caf-a3fec6c23908 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.020155713Z" level=info msg="Starting container: ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5" id=77976a5e-6352-49f6-9224-6a170266da87 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.038272027Z" level=info msg="Started container" PID=9945 containerID=ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5 description=openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcdctl id=77976a5e-6352-49f6-9224-6a170266da87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea Jan 23 16:15:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:51.056651 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" event=&{ID:5eb8d73fcd73cda1a9e34d91bb51e339 Type:ContainerStarted Data:b81ca3a0b0d3671f2f5de03b7cca6159cb912890943000863dcc8f77624d3cfd} Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.057157036Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=0fb77d59-ee5f-454a-9632-3556c5b506a9 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.057412055Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0fb77d59-ee5f-454a-9632-3556c5b506a9 name=/runtime.v1.ImageService/ImageStatus 
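The entries above pair each /runtime.v1.ImageService/PullImage request with its result through the id= field: "Pulling image:" and the matching "Pulled image:" share the same request id, so pull latency can be read straight out of the journal. A minimal sketch of that pairing, assuming the exact crio line layout seen here (feed it e.g. "journalctl -u crio | python3 pull_times.py"; the script name is illustrative):

#!/usr/bin/env python3
# Minimal sketch: measure CRI-O image pull latency from a journal dump
# like the one above, by pairing "Pulling image:"/"Pulled image:" on id=.
import re
import sys
from datetime import datetime

# crio embeds its own timestamp in the time="..." field of each entry.
ENTRY = re.compile(
    r'time="(?P<ts>[^"]+)" level=\w+ '
    r'msg="(?P<verb>Pulling|Pulled) image: (?P<image>[^"\\]+)" '
    r'id=(?P<id>[0-9a-f-]+)'
)

def parse_ts(ts: str) -> datetime:
    # e.g. "2023-01-23 16:15:47.997180612Z": truncate to microseconds.
    head, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")

pulls = {}  # request id -> (start time, image reference)
for line in sys.stdin:
    m = ENTRY.search(line)
    if not m:
        continue
    if m["verb"] == "Pulling":
        pulls[m["id"]] = (parse_ts(m["ts"]), m["image"])
    elif m["id"] in pulls:  # "Pulled" with a start we saw earlier
        start, image = pulls.pop(m["id"])
        took = (parse_ts(m["ts"]) - start).total_seconds()
        print(f"{took:8.3f}s  {image}")

"Pulled image:" entries whose request started before this slice of the log simply have no stored start time and are skipped, which matches what a partial journal dump can support.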
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.058086767Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=87530747-b3ae-4589-ad0a-6ce45e37eaf4 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-0107b8fbdb876cf074786d0d9f1e25038b471efbb73f0b761dc97a087e325c9e.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.061267618Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=87530747-b3ae-4589-ad0a-6ce45e37eaf4 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.062427074Z" level=info msg="Creating container: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd" id=1652878f-9f15-4a80-bfcb-77e7288cd001 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.062532824Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.065154525Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3dbfc7bb68771178c98466c575b946fc79e7dc4b682503d068c6eee99ef4f90f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1],Size_:837743627,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=aa773e77-6ea6-4eb0-961e-2a1faaca12f9 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.065225308Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3dbfc7bb68771178c98466c575b946fc79e7dc4b682503d068c6eee99ef4f90f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1],Size_:837743627,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bcb1ebe6-28d8-47e9-af45-beee527306e9 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.065260901Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" id=dba95e05-ccb2-4f93-b756-d7387291102b name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.065593649Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45e6f58e2808f3c415b498242b728ec8a2b4beb37b5b848bf705a5b3a0f08fe7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:946567bcda2161bc1f55a6aa236106c947c5d863225f024c8c46f19b91b71679],Size_:602228325,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5f5a0d7f-f88c-4e5c-8b9d-dcc10538d98b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.066150448Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" id=b57b9fab-6511-48bb-a938-8a620fd94e24 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.066214756Z" level=info msg="Creating container: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager" id=fd68cc13-6e16-4901-a577-311a6e5981e8 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.066286291Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.066218854Z" level=info msg="Creating container: openshift-cluster-node-tuning-operator/tuned-4pckj/tuned" id=426c5a1e-46dd-49be-a0b2-4cb116c99ebc name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.066329905Z" level=info msg="Creating container: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/setup" id=498bf271-e98f-46a6-bd91-a0a9e7235b67 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.066402800Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.066343859Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.070419704Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d" id=8430265b-6748-470a-b5f3-8517b71f5e97 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.070603288Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3dbfc7bb68771178c98466c575b946fc79e7dc4b682503d068c6eee99ef4f90f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1],Size_:837743627,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6c536df5-941f-47dd-b461-4dbd58437309 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.070989294Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=02a62e59-c8a6-4ed3-ab5c-6220df00f1d7 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.071014622Z" level=info msg="Creating container: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/wait-for-host-port" id=9ea8ed4e-e844-42e6-8428-2f3407e1219b name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.071053856Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d" id=9ab69c46-e617-40d8-b035-bab4a17b07d6 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.071077969Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.071085049Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=9411990f-bcd3-4118-81e9-5898210b9edb name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.071414934Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=6bbad06f-43da-48c3-8af5-7bb17fd65abc name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.071604994Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=88845894-0c4f-4163-a1f1-65739ff10e95 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.071909535Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a" id=a8d01033-2fdb-4840-b715-47d13f08e330 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.072361248Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a" id=35287983-ea0d-42b8-9742-86b55c5e1f40 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.073576385Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07393e2bf07f034d45fc866ab636f9893e2e27433fec0af7e85bfd9f16414f3c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac],Size_:332569518,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b57b9fab-6511-48bb-a938-8a620fd94e24 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.075776028Z" level=info msg="Creating container: openshift-monitoring/node-exporter-pbh26/init-textfile" id=16966fc4-b10e-45bd-9f52-8d24c4e2d2ee name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.076189652Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.077503133Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d9517d056d9d6704bcda5060e59d2042fe6d3ead8373d0e2d304d680bad1394c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d],Size_:574157423,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9ab69c46-e617-40d8-b035-bab4a17b07d6 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.078185975Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-7ks6h/cni-plugins" id=ca402a03-6d83-4db0-b503-b3b7d88b0a3c name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.078265706Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.078401454Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=88845894-0c4f-4163-a1f1-65739ff10e95 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.079042772Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-master-fld8m/northd" id=78d14c15-1043-4f32-a309-44fdae688516 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.079106144Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.083062310Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6bbad06f-43da-48c3-8af5-7bb17fd65abc name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.083489080Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:af345ea6b73792e57541360abf1a2ab19f86bb57bdb0a9d4b1999c474f235558,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f04a30cd7a5b862c7b8f22001aef3eaef191eb24f4c737039d7061609a2955a],Size_:427884811,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=35287983-ea0d-42b8-9742-86b55c5e1f40 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.084983863Z" level=info msg="Creating container: openshift-image-registry/node-ca-2j9w6/node-ca" id=4331b782-8331-4ce2-81f9-50b0868ae517 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.085096778Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.085148128Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovn-controller" id=406864e9-3c8e-4807-888d-596ea4d5dc2c name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.085303255Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.087983436Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2" id=d402ac8f-2e9d-4a4f-a12f-3d54f415f792 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.088564660Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2" id=e3f6a24d-beb9-473e-828a-2c9e5ae4a7c7 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 0107b8fbdb876cf074786d0d9f1e25038b471efbb73f0b761dc97a087e325c9e.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-41796ba377736e44b76075ebddcc5ce94f17cd9325502be8b7e6f7b51f086e03.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-baf7c92167bc083c00cc31c879b4a090ef67653d4f8c7929f65d3ae99ffb35f5.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.133005104Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fd146d7e43d8d51faaa837f331f5046da0bf04f148982baa2e216f0147996253,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:04cf677bb94e496d99394624e7a2334d96a87c86a3b11c5b698eb2c22ed1bcb2],Size_:431636232,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e3f6a24d-beb9-473e-828a-2c9e5ae4a7c7 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.134032852Z" level=info msg="Creating container: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy" id=06c4632b-876e-4757-ba9b-8fdf09e030dd name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.134126048Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 41796ba377736e44b76075ebddcc5ce94f17cd9325502be8b7e6f7b51f086e03.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container baf7c92167bc083c00cc31c879b4a090ef67653d4f8c7929f65d3ae99ffb35f5.
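Each container start shows up twice in the systemd entries: once as a crio-conmon-<id>.scope (the monitor process) and once as the libcontainer container <id> itself, sharing the same 64-hex container ID. A minimal sketch that cross-checks the two sets, assuming exactly the systemd line shapes seen here; a container appearing in only one set in a complete log would be worth a closer look:

#!/usr/bin/env python3
# Minimal sketch: pair crio-conmon scope starts with libcontainer
# container starts in systemd journal lines like those above.
import re
import sys

CONMON = re.compile(r"Started crio-conmon-([0-9a-f]{64})\.scope\.")
CONTAINER = re.compile(r"Started libcontainer container ([0-9a-f]{64})\.")

conmon, started = set(), set()
for line in sys.stdin:
    if (m := CONMON.search(line)):
        conmon.add(m.group(1))
    elif (m := CONTAINER.search(line)):
        started.add(m.group(1))

for cid in sorted(conmon - started):
    print(f"conmon up, container start not logged: {cid[:12]}")
for cid in sorted(started - conmon):
    print(f"container started, conmon scope not logged: {cid[:12]}")

On a truncated slice like this one, entries in either difference set may simply have their counterpart outside the captured window.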
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-b0d8b94034be985f3274efd2379eeb284adcb3dc837b687706b7e1029a0d411e.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-734fe6a51034d3c47e92ae12f6634673ace009a4b2b33c0698b4c0b6ecd887d3.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-0f87c0cea051ef56b41874c944da25369f0b7f16a13b76aff1650655b3b1f0bd.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 734fe6a51034d3c47e92ae12f6634673ace009a4b2b33c0698b4c0b6ecd887d3.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container b0d8b94034be985f3274efd2379eeb284adcb3dc837b687706b7e1029a0d411e.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-abed3b6f6a8317eb3f59febf5fb09ccaa8e19de616d5de6c6d2e9b1214f4c925.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.209548759Z" level=info msg="Created container 0107b8fbdb876cf074786d0d9f1e25038b471efbb73f0b761dc97a087e325c9e: openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/keepalived" id=4cf65ea3-7ec6-48cb-9d43-295e258d5df8 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.210258414Z" level=info msg="Starting container: 0107b8fbdb876cf074786d0d9f1e25038b471efbb73f0b761dc97a087e325c9e" id=6793436e-4598-4d4a-a4fa-753895b104db name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-54941a64a9b064efacd8f3eec8d82fd08784f593cadec9a517a4f6c8dfbc8f35.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 0f87c0cea051ef56b41874c944da25369f0b7f16a13b76aff1650655b3b1f0bd.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.222294409Z" level=info msg="Started container" PID=10060 containerID=0107b8fbdb876cf074786d0d9f1e25038b471efbb73f0b761dc97a087e325c9e description=openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/keepalived id=6793436e-4598-4d4a-a4fa-753895b104db name=/runtime.v1.RuntimeService/StartContainer sandboxID=46776229e966aaf0cd0c958b2e048b32ae5c8adb2af3d0d1833ad7bc56fef6c5
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-8516868f79512ead7e0711c7bf29ce5f64670311118cabbf48ada8b3e1b13e0a.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-ae688baa1e37e2f963b26ae00156dcb127051befa75e5c7682b44e2bb5129347.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.230355639Z" level=info msg="Created container 166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0: openshift-machine-config-operator/machine-config-daemon-jkffc/machine-config-daemon" id=f545af21-45ed-4774-9070-cd6026d1dd6c name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.230720774Z" level=info msg="Starting container: 166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0" id=fc124ced-48cc-4902-bba5-af882af206f3 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container abed3b6f6a8317eb3f59febf5fb09ccaa8e19de616d5de6c6d2e9b1214f4c925.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.231483838Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=8d3dd503-94f4-45dc-b151-1877fcde4c92 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.231703728Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8d3dd503-94f4-45dc-b151-1877fcde4c92 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.232400929Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=aa6334cb-ab9f-4fd0-bf74-fb27bd3c3460 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.232660599Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=aa6334cb-ab9f-4fd0-bf74-fb27bd3c3460 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.233230108Z" level=info msg="Creating container: openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/keepalived-monitor" id=4cbc2fda-685d-44b4-b61a-ae230ee6c094 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.233363707Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 54941a64a9b064efacd8f3eec8d82fd08784f593cadec9a517a4f6c8dfbc8f35.
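The crio lifecycle entries above and below carry enough structure to name every running process: "Created container <id>: <ns>/<pod>/<container>" binds the 64-hex ID to its workload, and the later "Started container" entry adds the PID and sandboxID. A minimal sketch that joins the two, assuming exactly the message shapes in this journal (stdin is a journal dump):

#!/usr/bin/env python3
# Minimal sketch: map each "Started container" PID back to its
# ns/pod/container name via the earlier "Created container" entry.
import re
import sys

CREATED = re.compile(r'msg="Created container ([0-9a-f]{64}): ([^"\\]+)"')
STARTED = re.compile(r'msg="Started container" PID=(\d+) containerID=([0-9a-f]{64})')

names = {}  # container ID -> openshift-ns/pod/container
for line in sys.stdin:
    if (m := CREATED.search(line)):
        names[m.group(1)] = m.group(2)
    elif (m := STARTED.search(line)):
        pid, cid = m.groups()
        print(f"PID {pid:>6}  {cid[:12]}  {names.get(cid, '<created outside this slice>')}")

For example, the entries below resolve PID 10176 to openshift-machine-config-operator/machine-config-daemon-jkffc/machine-config-daemon through container ID 166c5c47...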
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.240533341Z" level=info msg="Started container" PID=10176 containerID=166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0 description=openshift-machine-config-operator/machine-config-daemon-jkffc/machine-config-daemon id=fc124ced-48cc-4902-bba5-af882af206f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ad25ee8d88b9ea4bf65ebcb8e94ffc345f93b4faabdf223385a04740aa28e19 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.250470440Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" id=dc97d8ed-0504-4b7d-aea9-c37c1dbf52ab name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.250684385Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3 not found" id=dc97d8ed-0504-4b7d-aea9-c37c1dbf52ab name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.251164107Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" id=f020798a-7174-459b-af0c-9be6913d0b0c name=/runtime.v1.ImageService/PullImage Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.253043671Z" level=info msg="Created container baf7c92167bc083c00cc31c879b4a090ef67653d4f8c7929f65d3ae99ffb35f5: openshift-cluster-node-tuning-operator/tuned-4pckj/tuned" id=426c5a1e-46dd-49be-a0b2-4cb116c99ebc name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.253403492Z" level=info msg="Starting container: baf7c92167bc083c00cc31c879b4a090ef67653d4f8c7929f65d3ae99ffb35f5" id=6a83b6a7-aed1-46a4-b0e6-0f61269d94ca name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.253708937Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.254657714Z" level=info msg="Created container 41796ba377736e44b76075ebddcc5ce94f17cd9325502be8b7e6f7b51f086e03: openshift-machine-config-operator/machine-config-server-vpsv9/machine-config-server" id=765bc0b3-c835-46da-8a82-2755befa51aa name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.255130422Z" level=info msg="Starting container: 41796ba377736e44b76075ebddcc5ce94f17cd9325502be8b7e6f7b51f086e03" id=786fcd9f-71a3-416d-ab08-9a3b3ef36c2c name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.263989841Z" level=info msg="Started container" PID=10172 containerID=41796ba377736e44b76075ebddcc5ce94f17cd9325502be8b7e6f7b51f086e03 description=openshift-machine-config-operator/machine-config-server-vpsv9/machine-config-server id=786fcd9f-71a3-416d-ab08-9a3b3ef36c2c name=/runtime.v1.RuntimeService/StartContainer sandboxID=47feb76995838b353c3f736eacec3a5a4a678f77ea73390106cb5e1d6193debd Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:15:51.264665491Z" level=info msg="Started container" PID=10209 containerID=baf7c92167bc083c00cc31c879b4a090ef67653d4f8c7929f65d3ae99ffb35f5 description=openshift-cluster-node-tuning-operator/tuned-4pckj/tuned id=6a83b6a7-aed1-46a4-b0e6-0f61269d94ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=e9043f79ed9fe07655c793ca367554a40eef242c8126e361ece6172594d6895f Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c.scope. -- Subject: Unit crio-conmon-e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31. -- Subject: Unit crio-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope. -- Subject: Unit crio-conmon-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e. -- Subject: Unit crio-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container ae688baa1e37e2f963b26ae00156dcb127051befa75e5c7682b44e2bb5129347. -- Subject: Unit crio-ae688baa1e37e2f963b26ae00156dcb127051befa75e5c7682b44e2bb5129347.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-ae688baa1e37e2f963b26ae00156dcb127051befa75e5c7682b44e2bb5129347.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.273343970Z" level=info msg="Created container 734fe6a51034d3c47e92ae12f6634673ace009a4b2b33c0698b4c0b6ecd887d3: openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/coredns-monitor" id=e196d3a1-11a7-44e2-9c91-5649f79b3b49 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 8516868f79512ead7e0711c7bf29ce5f64670311118cabbf48ada8b3e1b13e0a. 
-- Subject: Unit crio-8516868f79512ead7e0711c7bf29ce5f64670311118cabbf48ada8b3e1b13e0a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-8516868f79512ead7e0711c7bf29ce5f64670311118cabbf48ada8b3e1b13e0a.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.273687927Z" level=info msg="Starting container: 734fe6a51034d3c47e92ae12f6634673ace009a4b2b33c0698b4c0b6ecd887d3" id=e0d76c3b-103c-4984-a17a-afa4a2b31775 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Reloading. Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.284409489Z" level=info msg="Created container 0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38: openshift-monitoring/node-exporter-pbh26/init-textfile" id=16966fc4-b10e-45bd-9f52-8d24c4e2d2ee name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.284817094Z" level=info msg="Starting container: 0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38" id=dceb3fad-5ea0-4fa9-ac85-bfcc5bf9680b name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.286670530Z" level=info msg="Started container" PID=10284 containerID=734fe6a51034d3c47e92ae12f6634673ace009a4b2b33c0698b4c0b6ecd887d3 description=openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab/coredns-monitor id=e0d76c3b-103c-4984-a17a-afa4a2b31775 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8456cad41ba97a04aeda7d023140cd9e70ca71ca5b7791529fbe81e3887613f8 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.287181444Z" level=info msg="Created container b0d8b94034be985f3274efd2379eeb284adcb3dc837b687706b7e1029a0d411e: openshift-dns/node-resolver-9bshd/dns-node-resolver" id=7f1d6247-2b9d-4e7e-b8d6-5b9e18b54aa9 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.287649326Z" level=info msg="Starting container: b0d8b94034be985f3274efd2379eeb284adcb3dc837b687706b7e1029a0d411e" id=7e7925d0-1485-4c82-a547-3c1443d66c8e name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.295120367Z" level=info msg="Started container" PID=10291 containerID=0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38 description=openshift-monitoring/node-exporter-pbh26/init-textfile id=dceb3fad-5ea0-4fa9-ac85-bfcc5bf9680b name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4376b9e0340b1b255c30c0cd7e1eca321fd1edc94cf24b4db89a98ab24c43f9 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.298242885Z" level=info msg="Started container" PID=10300 containerID=b0d8b94034be985f3274efd2379eeb284adcb3dc837b687706b7e1029a0d411e description=openshift-dns/node-resolver-9bshd/dns-node-resolver id=7e7925d0-1485-4c82-a547-3c1443d66c8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=8dd056b754da8a49246e3d7fc9fae2fc653e702f35aec269e12c16cef53eadc1 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.302438231Z" level=info msg="Created container 
0f87c0cea051ef56b41874c944da25369f0b7f16a13b76aff1650655b3b1f0bd: openshift-ovn-kubernetes/ovnkube-node-897lw/ovn-controller" id=406864e9-3c8e-4807-888d-596ea4d5dc2c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.302453204Z" level=info msg="Created container 274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da: openshift-multus/multus-cdt6c/kube-multus" id=5bd3fac4-7929-4e85-a952-5fed998fcf8d name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.302857412Z" level=info msg="Starting container: 274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da" id=467b1407-e613-44ca-9252-b34c0f113d0e name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.302884869Z" level=info msg="Starting container: 0f87c0cea051ef56b41874c944da25369f0b7f16a13b76aff1650655b3b1f0bd" id=14de7f58-34a4-4680-add4-470e94f9b56b name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.312628796Z" level=info msg="Started container" PID=10392 containerID=0f87c0cea051ef56b41874c944da25369f0b7f16a13b76aff1650655b3b1f0bd description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovn-controller id=14de7f58-34a4-4680-add4-470e94f9b56b name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.313113264Z" level=info msg="Started container" PID=10337 containerID=274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da description=openshift-multus/multus-cdt6c/kube-multus id=467b1407-e613-44ca-9252-b34c0f113d0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.317990659Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_bb4f74fb-bb7c-43e8-8262-9a0000894cee\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.321433494Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=81c9a7c5-7770-41ec-bfdb-93fba45cc899 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.322842055Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=81c9a7c5-7770-41ec-bfdb-93fba45cc899 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.323752843Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=43f03a91-e34a-430e-9521-b5b4169b8fee name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.324951742Z" level=info msg="Image 
status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=43f03a91-e34a-430e-9521-b5b4169b8fee name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.325571653Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovn-acl-logging" id=0b1ea728-2bc2-496d-b483-270291cc8a49 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.325640794Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.329971482Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.329992861Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.342142633Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.352725243Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.352742974Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.352754759Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_bb4f74fb-bb7c-43e8-8262-9a0000894cee\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.374720172Z" level=info msg="Created container abed3b6f6a8317eb3f59febf5fb09ccaa8e19de616d5de6c6d2e9b1214f4c925: openshift-ovn-kubernetes/ovnkube-master-fld8m/northd" id=78d14c15-1043-4f32-a309-44fdae688516 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.375180379Z" level=info msg="Starting container: abed3b6f6a8317eb3f59febf5fb09ccaa8e19de616d5de6c6d2e9b1214f4c925" id=4330fc6f-f562-4341-b1ba-19a651ac7d27 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.381604660Z" level=info msg="Started container" PID=10449 containerID=abed3b6f6a8317eb3f59febf5fb09ccaa8e19de616d5de6c6d2e9b1214f4c925 description=openshift-ovn-kubernetes/ovnkube-master-fld8m/northd id=4330fc6f-f562-4341-b1ba-19a651ac7d27 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.388406059Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" 
id=308a203b-1f0e-49b0-9466-fac5324b595a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.389533219Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=308a203b-1f0e-49b0-9466-fac5324b595a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.390221684Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=17ec9f1d-c260-4d04-aa51-347a0b65198e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.391475282Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=17ec9f1d-c260-4d04-aa51-347a0b65198e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.393043676Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-master-fld8m/nbdb" id=08c93369-4122-475b-87f6-f906dcabf359 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.393106680Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:15:51 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00090|bridge|INFO|bridge br-ex: added interface patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int on port 2 Jan 23 16:15:51 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490551.4019] manager: (patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28) Jan 23 16:15:51 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490551.4030] manager: (patch-br-ex_hub-master-0.workload.bos2.lab-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29) Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope has successfully entered the 'dead' state. 
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope: Consumed 122ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope completed and consumed the indicated resources.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope has successfully entered the 'dead' state.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope: Consumed 43ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38.scope completed and consumed the indicated resources.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172.scope.
-- Subject: Unit crio-conmon-eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7.scope.
-- Subject: Unit crio-conmon-9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37.scope.
-- Subject: Unit crio-conmon-b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.
-- Subject: Unit crio-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c.
-- Subject: Unit crio-e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a.scope.
-- Subject: Unit crio-conmon-c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.494943131Z" level=info msg="Created container 24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/setup" id=498bf271-e98f-46a6-bd91-a0a9e7235b67 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.494938598Z" level=info msg="Created container 54941a64a9b064efacd8f3eec8d82fd08784f593cadec9a517a4f6c8dfbc8f35: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd" id=1652878f-9f15-4a80-bfcb-77e7288cd001 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.495499760Z" level=info msg="Starting container: 54941a64a9b064efacd8f3eec8d82fd08784f593cadec9a517a4f6c8dfbc8f35" id=3c70120e-e3a6-4a88-ac89-c13788bfbb98 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.495586398Z" level=info msg="Starting container: 24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e" id=179c3b0f-d460-411a-9841-52a0bff40b51 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37.
-- Subject: Unit crio-b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7.
-- Subject: Unit crio-9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172.
-- Subject: Unit crio-eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a.
-- Subject: Unit crio-c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.503334884Z" level=info msg="Started container" PID=10516 containerID=54941a64a9b064efacd8f3eec8d82fd08784f593cadec9a517a4f6c8dfbc8f35 description=openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd id=3c70120e-e3a6-4a88-ac89-c13788bfbb98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.503803910Z" level=info msg="Started container" PID=10634 containerID=24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e description=openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/setup id=179c3b0f-d460-411a-9841-52a0bff40b51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.511965512Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=2f9fbe11-6b11-4f9c-bac0-b6a2d54b7e59 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.512129120Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2f9fbe11-6b11-4f9c-bac0-b6a2d54b7e59 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.512684875Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020" id=269143eb-5910-4478-8df2-2bee9cc9f5c6 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.512787275Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ef7098b791ba1da50f9b969abcf6ce813b9277772890b5c0da9240df6fd081b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24d9b9d9d7fadacbc505c849a1e4b390b2f0fcd452ad851b7cce21e8cfec2020],Size_:424328496,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=269143eb-5910-4478-8df2-2bee9cc9f5c6 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.513543522Z" level=info msg="Creating container: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-metrics" id=e2304792-d6a9-477d-8c31-5dc796ae8045 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.513605701Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Starting rpm-ostree System Management Daemon...
-- Subject: Unit rpm-ostreed.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit rpm-ostreed.service has begun starting up.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.534790725Z" level=info msg="Created container e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c: openshift-machine-api/ironic-proxy-nhh2z/ironic-proxy" id=5d96c446-4cd3-47f6-b7ec-30792128bc6c name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.535319118Z" level=info msg="Starting container: e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c" id=3c05b1fd-a9a3-4577-9150-2fb66d7e660f name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.539202763Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.547166818Z" level=info msg="Started container" PID=11168 containerID=e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c description=openshift-machine-api/ironic-proxy-nhh2z/ironic-proxy id=3c05b1fd-a9a3-4577-9150-2fb66d7e660f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3356e4ad1b668d8247a1c1445668566ae4738fa934546baaa9bad867ca9a7563
Jan 23 16:15:51 hub-master-0.workload.bos2.lab rpm-ostree[11274]: Reading config file '/etc/rpm-ostreed.conf'
Jan 23 16:15:51 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.383' (uid=0 pid=11274 comm="/usr/bin/rpm-ostree start-daemon " label="system_u:system_r:install_t:s0")
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.608127222Z" level=info msg="Created container ae688baa1e37e2f963b26ae00156dcb127051befa75e5c7682b44e2bb5129347: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager" id=fd68cc13-6e16-4901-a577-311a6e5981e8 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.608199716Z" level=info msg="Created container 8516868f79512ead7e0711c7bf29ce5f64670311118cabbf48ada8b3e1b13e0a: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy" id=06c4632b-876e-4757-ba9b-8fdf09e030dd name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.608329814Z" level=info msg="Created container e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/wait-for-host-port" id=9ea8ed4e-e844-42e6-8428-2f3407e1219b name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.608653121Z" level=info msg="Starting container: ae688baa1e37e2f963b26ae00156dcb127051befa75e5c7682b44e2bb5129347" id=8051b8d1-d1da-4631-87f1-25879a899183 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.608680726Z" level=info msg="Starting container: 8516868f79512ead7e0711c7bf29ce5f64670311118cabbf48ada8b3e1b13e0a" id=859f3aab-109a-4f3c-a38c-37f7715f5b03 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.608743786Z" level=info msg="Starting container: e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31" id=688998c8-cba3-4cdf-bf4d-961a08bcbf85 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.615131424Z" level=info msg="Started container" PID=10657 containerID=8516868f79512ead7e0711c7bf29ce5f64670311118cabbf48ada8b3e1b13e0a description=openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy id=859f3aab-109a-4f3c-a38c-37f7715f5b03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.615264516Z" level=info msg="Started container" PID=10637 containerID=ae688baa1e37e2f963b26ae00156dcb127051befa75e5c7682b44e2bb5129347 description=openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager id=8051b8d1-d1da-4631-87f1-25879a899183 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.615789025Z" level=info msg="Started container" PID=10644 containerID=e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31 description=openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/wait-for-host-port id=688998c8-cba3-4cdf-bf4d-961a08bcbf85 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.622682002Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=2c0ccfda-608c-4e86-9f6a-e44b694b7504 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.622693911Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9cf69b5d2fd7ddcfead1a23901c0b2b4d04aebad77094f1aeb150e1ad77bb52" id=0a655c29-3ba7-43aa-b92d-3ecefc73bb0d name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.622900256Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2c0ccfda-608c-4e86-9f6a-e44b694b7504 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.622908451Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9cf69b5d2fd7ddcfead1a23901c0b2b4d04aebad77094f1aeb150e1ad77bb52 not found" id=0a655c29-3ba7-43aa-b92d-3ecefc73bb0d name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.623293555Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9cf69b5d2fd7ddcfead1a23901c0b2b4d04aebad77094f1aeb150e1ad77bb52" id=b7f2569b-4570-4989-87ef-b37845e47d43 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.623724333Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67" id=0c1a463a-11da-4d98-bc04-9a0354cb59ac name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.623799840Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ecc0cdc6ecc65607d63a1847e235f4988c104b07e680c0eed8b2fc0e5c20d934,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b61429030b790e6ec6e9fcb52b2a17c5b794815d6da9806bc563bc45e84aa67],Size_:650276009,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0c1a463a-11da-4d98-bc04-9a0354cb59ac name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.624314046Z" level=info msg="Creating container: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor" id=12b3d8e7-9612-42fd-9a60-e5ea03939ebc name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.624335634Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9cf69b5d2fd7ddcfead1a23901c0b2b4d04aebad77094f1aeb150e1ad77bb52\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.624400663Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.626402481Z" level=info msg="Created container 01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c: openshift-multus/multus-additional-cni-plugins-7ks6h/cni-plugins" id=ca402a03-6d83-4db0-b503-b3b7d88b0a3c name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.626620136Z" level=info msg="Starting container: 01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c" id=1c95e513-fc6f-474a-a2f5-0a0234e22d83 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.632645290Z" level=info msg="Started container" PID=11158 containerID=01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c description=openshift-multus/multus-additional-cni-plugins-7ks6h/cni-plugins id=1c95e513-fc6f-474a-a2f5-0a0234e22d83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.637480391Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_744cbfa8-747c-496c-96e9-c7180935006a\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.648577600Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.648604663Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.669789349Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bandwidth\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.679778353Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.679806661Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.679824375Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bridge\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.688786827Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.688804400Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.688819802Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/dhcp\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.697232472Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.697253419Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.697266939Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/firewall\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.705634944Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.705652214Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.705661566Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/host-device\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope has successfully entered the 'dead' state.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope: Consumed 41ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope completed and consumed the indicated resources.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope has successfully entered the 'dead' state.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope: Consumed 37ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e.scope completed and consumed the indicated resources.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope has successfully entered the 'dead' state.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope: Consumed 34ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope completed and consumed the indicated resources.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope has successfully entered the 'dead' state.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope: Consumed 41ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31.scope completed and consumed the indicated resources.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope has successfully entered the 'dead' state.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope: Consumed 63ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope completed and consumed the indicated resources.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope has successfully entered the 'dead' state.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope: Consumed 42ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c.scope completed and consumed the indicated resources.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.714129359Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.714146075Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.714154826Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/host-local\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.722887753Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.722916445Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.722932701Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ipvlan\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3.scope.
-- Subject: Unit crio-conmon-17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Starting Authorization Manager...
-- Subject: Unit polkit.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit polkit.service has begun starting up.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25.scope.
-- Subject: Unit crio-conmon-6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.734393247Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.734586533Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.734604078Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/loopback\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25.
-- Subject: Unit crio-6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.744291096Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.744319296Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.744339096Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/macvlan\""
Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3.
-- Subject: Unit crio-17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.748111360Z" level=info msg="Created container c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a: openshift-image-registry/node-ca-2j9w6/node-ca" id=4331b782-8331-4ce2-81f9-50b0868ae517 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.748534020Z" level=info msg="Starting container: c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a" id=09bf5b02-2195-4cd6-8fe4-8bcf1d3704d4 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.748911082Z" level=info msg="Created container b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37: openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/keepalived-monitor" id=4cbc2fda-685d-44b4-b61a-ae230ee6c094 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.749173282Z" level=info msg="Starting container: b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37" id=fe55a863-42b3-4d84-beaf-dfc3e432dd2d name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.754577258Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.754600407Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.754613413Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/portmap\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.756054530Z" level=info msg="Started container" PID=11278 containerID=c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a description=openshift-image-registry/node-ca-2j9w6/node-ca id=09bf5b02-2195-4cd6-8fe4-8bcf1d3704d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09800a84b987461f550d247ef6464a0986fe8ff5e6d4c93c478c84298037f1c2 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.757121205Z" level=info msg="Started container" PID=11261 containerID=b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37 description=openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab/keepalived-monitor id=fe55a863-42b3-4d84-beaf-dfc3e432dd2d name=/runtime.v1.RuntimeService/StartContainer sandboxID=46776229e966aaf0cd0c958b2e048b32ae5c8adb2af3d0d1833ad7bc56fef6c5 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.765194300Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.765233756Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.765252307Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ptp\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.775903765Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:15:51.775930482Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.775945642Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/sbr\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab polkitd[11681]: Started polkitd version 0.115 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.786588258Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.786612933Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.786629413Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/static\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.796143917Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.796169768Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.796183385Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/tuning\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.808248029Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.808278107Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.808292975Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/vlan\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.812118994Z" level=info msg="Created container 9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7: openshift-ovn-kubernetes/ovnkube-node-897lw/ovn-acl-logging" id=0b1ea728-2bc2-496d-b483-270291cc8a49 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.812120434Z" level=info msg="Created container eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172: openshift-ovn-kubernetes/ovnkube-master-fld8m/nbdb" id=08c93369-4122-475b-87f6-f906dcabf359 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.812231039Z" level=info msg="Created container 6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-metrics" id=e2304792-d6a9-477d-8c31-5dc796ae8045 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.812705354Z" level=info msg="Starting container: 9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7" id=409f1940-7956-4b64-861c-1b2aaab4c9b3 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.812729643Z" level=info msg="Starting container: eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172" 
id=ff0f6d5d-fbdd-4295-bd13-1e1d95d57249 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.812734901Z" level=info msg="Starting container: 6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25" id=9d39a979-93f0-470b-a877-07b4c542c659 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.815312449Z" level=info msg="Created container 17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3: openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor" id=12b3d8e7-9612-42fd-9a60-e5ea03939ebc name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.815648006Z" level=info msg="Starting container: 17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3" id=cc729f04-be61-4d5b-8cef-88e3adeb96f2 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.818278754Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.818303939Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.818317644Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/vrf\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.820067061Z" level=info msg="Started container" PID=11265 containerID=9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovn-acl-logging id=409f1940-7956-4b64-861c-1b2aaab4c9b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.820441268Z" level=info msg="Started container" PID=11262 containerID=eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172 description=openshift-ovn-kubernetes/ovnkube-master-fld8m/nbdb id=ff0f6d5d-fbdd-4295-bd13-1e1d95d57249 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7 Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.820824826Z" level=info msg="Started container" PID=11784 containerID=6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25 description=openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-metrics id=9d39a979-93f0-470b-a877-07b4c542c659 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea Jan 23 16:15:51 hub-master-0.workload.bos2.lab polkitd[11681]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 16:15:51 hub-master-0.workload.bos2.lab polkitd[11681]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.823925436Z" level=info msg="Started container" PID=11783 containerID=17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3 description=openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab/haproxy-monitor id=cc729f04-be61-4d5b-8cef-88e3adeb96f2 name=/runtime.v1.RuntimeService/StartContainer 
sandboxID=1cc35440e2de690c5cc9aedcb3596da9c3182f41e9b02a81971713bcd29d4da7 Jan 23 16:15:51 hub-master-0.workload.bos2.lab polkitd[11681]: Finished loading, compiling and executing 3 rules Jan 23 16:15:51 hub-master-0.workload.bos2.lab dbus-daemon[2917]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.828287640Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=191ff854-b68d-486c-868c-7159f593927c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started Authorization Manager. -- Subject: Unit polkit.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit polkit.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.828527780Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6 not found" id=191ff854-b68d-486c-868c-7159f593927c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab polkitd[11681]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.828976283Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=4fc30815-0ffc-43b8-847c-54989b0401fb name=/runtime.v1.ImageService/PullImage Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.829147349Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.829181555Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.829215489Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_744cbfa8-747c-496c-96e9-c7180935006a\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.829312089Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a16832e9c1f864f0f8455237987cb75061483d55d4fd2619af2f93ac3563390d" id=7ac01e1f-ac72-4f04-beeb-3d8ee50ec58a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.829447599Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a16832e9c1f864f0f8455237987cb75061483d55d4fd2619af2f93ac3563390d not found" id=7ac01e1f-ac72-4f04-beeb-3d8ee50ec58a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.829863906Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a16832e9c1f864f0f8455237987cb75061483d55d4fd2619af2f93ac3563390d" id=ffaf7569-66da-4890-aba0-89ce685ce194 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.829961174Z" level=info msg="Trying to access 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.830528987Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a16832e9c1f864f0f8455237987cb75061483d55d4fd2619af2f93ac3563390d\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab systemd[1]: Started rpm-ostree System Management Daemon. -- Subject: Unit rpm-ostreed.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit rpm-ostreed.service has finished starting up. -- -- The start-up result is done. Jan 23 16:15:51 hub-master-0.workload.bos2.lab rpm-ostree[11274]: In idle state; will auto-exit in 64 seconds Jan 23 16:15:51 hub-master-0.workload.bos2.lab rpm-ostree[11274]: client(id:cli dbus:1.390 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0) added; new total=1 Jan 23 16:15:51 hub-master-0.workload.bos2.lab rpm-ostree[11274]: client(id:cli dbus:1.390 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0) vanished; remaining=0 Jan 23 16:15:51 hub-master-0.workload.bos2.lab rpm-ostree[11274]: In idle state; will auto-exit in 63 seconds Jan 23 16:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:51.904171722Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9cf69b5d2fd7ddcfead1a23901c0b2b4d04aebad77094f1aeb150e1ad77bb52\"" Jan 23 16:15:51 hub-master-0.workload.bos2.lab root[12105]: machine-config-daemon[10176]: Starting to manage node: hub-master-0.workload.bos2.lab Jan 23 16:15:51 hub-master-0.workload.bos2.lab rpm-ostree[11274]: client(id:machine-config-operator dbus:1.391 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0) added; new total=1 Jan 23 16:15:51 hub-master-0.workload.bos2.lab rpm-ostree[11274]: client(id:machine-config-operator dbus:1.391 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0) vanished; remaining=0 Jan 23 16:15:51 hub-master-0.workload.bos2.lab rpm-ostree[11274]: In idle state; will auto-exit in 61 seconds Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.061388 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9bshd" event=&{ID:839425af-4ad1-4627-b58f-20197745cb4a Type:ContainerStarted Data:b0d8b94034be985f3274efd2379eeb284adcb3dc837b687706b7e1029a0d411e} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.062602 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:9ea1efffe9818aff80a1dc30563b2b81f75ea1f77cc56160c14755222ddb19a7} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.062626 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:0f87c0cea051ef56b41874c944da25369f0b7f16a13b76aff1650655b3b1f0bd} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.063718 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted 
Data:274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.064870 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" event=&{ID:841c556dbc6afe45e33a42a9dd8b5492 Type:ContainerStarted Data:b5f0b7bad06e28261dccad1cdf67dd614addb9e37e632beae72047ce5c5edc37} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.064887 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" event=&{ID:841c556dbc6afe45e33a42a9dd8b5492 Type:ContainerStarted Data:0107b8fbdb876cf074786d0d9f1e25038b471efbb73f0b761dc97a087e325c9e} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.065587 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/ironic-proxy-nhh2z" event=&{ID:dd7e23a1-2620-491c-a453-b41708d2e0d7 Type:ContainerStarted Data:e4e094a0d84ba1e4cb3b956cbc2949f41425e30de8823497a4e64cd9ccf4065c} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.067571 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kni-infra_haproxy-hub-master-0.workload.bos2.lab_04f654eda4f14a4bee64377a5c765343/haproxy-monitor/4.log" Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.067776 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" event=&{ID:04f654eda4f14a4bee64377a5c765343 Type:ContainerStarted Data:17d8de92afbc22f96396d52e77bc90cdac1aff8819fb29068d17133ee4d6a6a3} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.067799 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" event=&{ID:04f654eda4f14a4bee64377a5c765343 Type:ContainerStarted Data:8516868f79512ead7e0711c7bf29ce5f64670311118cabbf48ada8b3e1b13e0a} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.072214 8631 generic.go:296] "Generic (PLEG): container finished" podID=94cb9be9-32f4-413c-9fdf-a6e9307ff410 containerID="01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c" exitCode=0 Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.072260 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" event=&{ID:94cb9be9-32f4-413c-9fdf-a6e9307ff410 Type:ContainerDied Data:01b88387c40f52b2440129d740ccc81df27e633b2ef034f38706dd86c6772e4c} Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.072997068Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee" id=60719a02-da57-488b-b8e3-364c05cfb1e8 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.073273750Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee not found" id=60719a02-da57-488b-b8e3-364c05cfb1e8 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.073658651Z" level=info msg="Pulling image: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee" id=91bccca7-b791-4aa6-b09f-a7b09cba2f18 name=/runtime.v1.ImageService/PullImage Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.074511653Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee\"" Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.074594 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" event=&{ID:38eebeadc7ddc4d42d1de9a5e4ac69f1 Type:ContainerStarted Data:ffe37200e4ea1a25d48f214d4c7d3df0e844b3986879db720e47cc2f78629ca5} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.074612 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" event=&{ID:38eebeadc7ddc4d42d1de9a5e4ac69f1 Type:ContainerStarted Data:6a66551404678870db6e361cc999d1e166415c96db7a58d58668618379784a25} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.074622 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" event=&{ID:38eebeadc7ddc4d42d1de9a5e4ac69f1 Type:ContainerStarted Data:54941a64a9b064efacd8f3eec8d82fd08784f593cadec9a517a4f6c8dfbc8f35} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.075117 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vpsv9" event=&{ID:d7b22547-215c-4758-8154-a3bfc577ec12 Type:ContainerStarted Data:41796ba377736e44b76075ebddcc5ce94f17cd9325502be8b7e6f7b51f086e03} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.075757 8631 generic.go:296] "Generic (PLEG): container finished" podID=77321459d336b7d15305c9b9a83e4081 containerID="e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31" exitCode=0 Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.075792 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" event=&{ID:77321459d336b7d15305c9b9a83e4081 Type:ContainerDied Data:e8c942e7b2f63f3fcfd6c34d71239ecfb4db41a88e22a0a2a9c1dce2ace8ac31} Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.076160152Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=ece04133-4b63-438a-aa9f-edfc16366259 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.076308 8631 generic.go:296] "Generic (PLEG): container finished" podID=9552ff413d8390655360ce968177c622 containerID="24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e" exitCode=0 Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.076333 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" event=&{ID:9552ff413d8390655360ce968177c622 Type:ContainerDied Data:24d5aec5b61ea47eaed2d58e4ec516e89bc5acc886223fdb0e856f09efce051e} Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.076651856Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=0130a360-e785-43d0-9fc3-4f11e0f744da name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.077252998Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3dbfc7bb68771178c98466c575b946fc79e7dc4b682503d068c6eee99ef4f90f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1],Size_:837743627,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ece04133-4b63-438a-aa9f-edfc16366259 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.077591 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" event=&{ID:5eb8d73fcd73cda1a9e34d91bb51e339 Type:ContainerStarted Data:734fe6a51034d3c47e92ae12f6634673ace009a4b2b33c0698b4c0b6ecd887d3} Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.077701488Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3dbfc7bb68771178c98466c575b946fc79e7dc4b682503d068c6eee99ef4f90f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1],Size_:837743627,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0130a360-e785-43d0-9fc3-4f11e0f744da name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.077800086Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=3904a81d-1dec-48a2-817c-883b16ea3c1e name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.078051617Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1" id=617600c8-ab83-4cc7-bfd0-224c526a4a8b name=/runtime.v1.ImageService/ImageStatus Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.078424 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" event=&{ID:a88a1018-cc7c-4bd1-b3d2-0d960b53459c Type:ContainerStarted Data:eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172} Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.078441 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" event=&{ID:a88a1018-cc7c-4bd1-b3d2-0d960b53459c Type:ContainerStarted Data:abed3b6f6a8317eb3f59febf5fb09ccaa8e19de616d5de6c6d2e9b1214f4c925} Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.078716448Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3dbfc7bb68771178c98466c575b946fc79e7dc4b682503d068c6eee99ef4f90f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1],Size_:837743627,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3904a81d-1dec-48a2-817c-883b16ea3c1e name=/runtime.v1.ImageService/ImageStatus 
Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.078977 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" event=&{ID:b8e918bfaafef0fc7d13026942c43171 Type:ContainerStarted Data:ae688baa1e37e2f963b26ae00156dcb127051befa75e5c7682b44e2bb5129347}
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.078990407Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3dbfc7bb68771178c98466c575b946fc79e7dc4b682503d068c6eee99ef4f90f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:662c00a50b8327cc39963577d3e11aa71458b3888ce06223a4501679a28fecd1],Size_:837743627,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=617600c8-ab83-4cc7-bfd0-224c526a4a8b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.079586474Z" level=info msg="Creating container: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler" id=8ab6650b-f200-495f-8ed9-2c6162ad1455 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.079586876Z" level=info msg="Creating container: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver" id=8a7ad14d-e06b-4e74-9bbe-ad3d5d33b93f name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.079663157Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.079670131Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.079680 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jkffc" event=&{ID:612bc2d6-261c-4dc3-9902-489a4589ec9b Type:ContainerStarted Data:166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0}
Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.080382 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-4pckj" event=&{ID:16d2550a-6aa8-453b-9d72-f50466ef11b2 Type:ContainerStarted Data:baf7c92167bc083c00cc31c879b4a090ef67653d4f8c7929f65d3ae99ffb35f5}
Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.080936 8631 generic.go:296] "Generic (PLEG): container finished" podID=ff6a907c-8dc5-4524-b928-d97ba7b430c3 containerID="0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38" exitCode=0
Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.080964 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-pbh26" event=&{ID:ff6a907c-8dc5-4524-b928-d97ba7b430c3 Type:ContainerDied Data:0a0ce0774328b3075d2dd7b5d07b16cc9f3c45742db1f0a414e50310d70f5b38}
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.081342255Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" id=5088cea6-5045-4581-8243-9035068b6904 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:52.081434 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2j9w6" event=&{ID:5ced4aec-1711-4abf-825a-c546047148b7 Type:ContainerStarted Data:c6f60dc53a8aa6497437e8e3767adfc62df98a1b14c3a45eecfac826986f5c6a}
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.089733982Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07393e2bf07f034d45fc866ab636f9893e2e27433fec0af7e85bfd9f16414f3c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac],Size_:332569518,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5088cea6-5045-4581-8243-9035068b6904 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.090193144Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" id=4689fb52-96bf-48f9-8ce7-6c51ae08e87a name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.091546081Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07393e2bf07f034d45fc866ab636f9893e2e27433fec0af7e85bfd9f16414f3c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac],Size_:332569518,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4689fb52-96bf-48f9-8ce7-6c51ae08e87a name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.092048645Z" level=info msg="Creating container: openshift-monitoring/node-exporter-pbh26/node-exporter" id=3b0a51b6-941a-4841-8a38-0891e8e046ec name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.092112703Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.100680990Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6\""
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948.scope.
-- Subject: Unit crio-conmon-986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.115774890Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a16832e9c1f864f0f8455237987cb75061483d55d4fd2619af2f93ac3563390d\""
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948.
-- Subject: Unit crio-986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb.scope.
-- Subject: Unit crio-conmon-7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936.scope.
-- Subject: Unit crio-conmon-10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb.
-- Subject: Unit crio-7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936.
-- Subject: Unit crio-10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.220964455Z" level=info msg="Created container 986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948: openshift-monitoring/node-exporter-pbh26/node-exporter" id=3b0a51b6-941a-4841-8a38-0891e8e046ec name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.221547519Z" level=info msg="Starting container: 986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948" id=f06fe99c-5ab7-4b3b-9c3c-2f463a0cce40 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.227636811Z" level=info msg="Created container 10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver" id=8a7ad14d-e06b-4e74-9bbe-ad3d5d33b93f name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.227899515Z" level=info msg="Starting container: 10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936" id=bf07763e-f928-4843-a813-82678e714667 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.227964565Z" level=info msg="Started container" PID=12131 containerID=986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948 description=openshift-monitoring/node-exporter-pbh26/node-exporter id=f06fe99c-5ab7-4b3b-9c3c-2f463a0cce40 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4376b9e0340b1b255c30c0cd7e1eca321fd1edc94cf24b4db89a98ab24c43f9
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.234463231Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=51dc1718-7de5-4db3-9c5c-dff0ffa51692 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.234597686Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6 not found" id=51dc1718-7de5-4db3-9c5c-dff0ffa51692 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.234875874Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=3db3f7ed-d1d2-4551-8c04-e1d09493880f name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.235059518Z" level=info msg="Started container" PID=12175 containerID=10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936 description=openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver id=bf07763e-f928-4843-a813-82678e714667 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.235712661Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6\""
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.242231624Z" level=info msg="Created container 7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler" id=8ab6650b-f200-495f-8ed9-2c6162ad1455 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.242765894Z" level=info msg="Starting container: 7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb" id=d2414b2f-e31f-4d50-b550-6dc9be537f85 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.242898438Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=d2b635e2-0f49-4b9f-afb1-04d729a8c005 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.243012630Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018 not found" id=d2b635e2-0f49-4b9f-afb1-04d729a8c005 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.243218549Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=6af69a5c-1ff4-4f3b-9e3a-201f9d69a34f name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.243933982Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018\""
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.249021466Z" level=info msg="Started container" PID=12165 containerID=7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb description=openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler id=d2414b2f-e31f-4d50-b550-6dc9be537f85 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.255106986Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" id=3dc67db4-13f1-463d-92cc-ad8dd8280779 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.255221629Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651 not found" id=3dc67db4-13f1-463d-92cc-ad8dd8280779 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.255517802Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" id=7a8f653a-a4f6-4ce2-b400-d3668cf33e71 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.256288977Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651\""
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.497333128Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6\""
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.545229668Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018\""
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.639236024Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" id=f020798a-7174-459b-af0c-9be6913d0b0c name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.639901131Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" id=d1da89ff-016d-4c5b-a98d-192118177bf5 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.640591784Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9f1e53f37e7b7a4b1c73cd1beb63fb311820b9d1be1d7f481dc41a7f7ae466c8,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3],Size_:352981509,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d1da89ff-016d-4c5b-a98d-192118177bf5 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.641083886Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-jkffc/oauth-proxy" id=a72a1d6f-4bab-4f31-ae3c-30914c1612aa name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.641159074Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029.scope.
-- Subject: Unit crio-conmon-4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.656025694Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651\""
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029.
-- Subject: Unit crio-4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:15:52 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.775953019Z" level=info msg="Created container 4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029: openshift-machine-config-operator/machine-config-daemon-jkffc/oauth-proxy" id=a72a1d6f-4bab-4f31-ae3c-30914c1612aa name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.776343961Z" level=info msg="Starting container: 4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029" id=42106138-b3d8-42d4-93ce-f1aa8477ba87 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:15:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:52.795971195Z" level=info msg="Started container" PID=12324 containerID=4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029 description=openshift-machine-config-operator/machine-config-daemon-jkffc/oauth-proxy id=42106138-b3d8-42d4-93ce-f1aa8477ba87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ad25ee8d88b9ea4bf65ebcb8e94ffc345f93b4faabdf223385a04740aa28e19
Jan 23 16:15:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:53.084836 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jkffc" event=&{ID:612bc2d6-261c-4dc3-9902-489a4589ec9b Type:ContainerStarted Data:4e78e22714e61793c26dfe2468eee3064c74cc801978254c5b753a82884ad029}
Jan 23 16:15:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:53.085968 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" event=&{ID:9552ff413d8390655360ce968177c622 Type:ContainerStarted Data:10be3d2d74176a299008b87a1f73d4f03ad876baf53f2758332670a138fda936}
Jan 23 16:15:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:53.087247 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" event=&{ID:77321459d336b7d15305c9b9a83e4081 Type:ContainerStarted Data:7af1d488a05842a958b19b2809a3272e32a0d0be48bd76b20abdc1efb0af83fb}
Jan 23 16:15:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:15:53.088124 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-pbh26" event=&{ID:ff6a907c-8dc5-4524-b928-d97ba7b430c3 Type:ContainerStarted Data:986a20f53ea2279c0c1d4bf3e4ac4b02c69112ab24eb4ddd52a4a6f531d22948}
Jan 23 16:15:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:53.624397968Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee\""
Jan 23 16:15:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:53.951550081Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=5dad6c9f-3c6a-4022-9f85-e698c64de695 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:54.604796694Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6 not found" id=5dad6c9f-3c6a-4022-9f85-e698c64de695 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:54.605507706Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=286efd14-3c5c-4c7a-9b35-f24bb1d633bf name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:54.652543499Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6\""
Jan 23 16:15:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:55.220400618Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6\""
Jan 23 16:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:58.145865377Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:58.784241423Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a16832e9c1f864f0f8455237987cb75061483d55d4fd2619af2f93ac3563390d" id=ffaf7569-66da-4890-aba0-89ce685ce194 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:58.784242905Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9cf69b5d2fd7ddcfead1a23901c0b2b4d04aebad77094f1aeb150e1ad77bb52" id=b7f2569b-4570-4989-87ef-b37845e47d43 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:58.795657529Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9cf69b5d2fd7ddcfead1a23901c0b2b4d04aebad77094f1aeb150e1ad77bb52" id=25142bad-9a64-4d5c-819d-89dcd3c1bea0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:58.795690707Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a16832e9c1f864f0f8455237987cb75061483d55d4fd2619af2f93ac3563390d" id=acabab79-49a4-4a48-b941-b36595f72979 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.413054601Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=3db3f7ed-d1d2-4551-8c04-e1d09493880f name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.470176351Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=bcb3ae9e-c47e-4733-84ed-f114ac144a30 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.586953172Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=4fc30815-0ffc-43b8-847c-54989b0401fb name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.587172190Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=6af69a5c-1ff4-4f3b-9e3a-201f9d69a34f name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.639039466Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=0d2ea3c6-c43f-40b5-83bb-5e9aad04004f name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.639074896Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=60d8e7d6-a21b-4df4-be1a-9b406ffc0829 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.696978187Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:83e896edec1eb2ff032e4ed82d4a23af252e0046b6d1d040a619f9502fdff2df,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6],Size_:406136803,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bcb3ae9e-c47e-4733-84ed-f114ac144a30 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.697780285Z" level=info msg="Creating container: openshift-monitoring/node-exporter-pbh26/kube-rbac-proxy" id=5df1218a-43ab-475c-95d0-b1f9a624b60e name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.697854159Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.859524980Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12b1481af5e709ce9bed7cf5910c07d0dcee72c1e0b2fb2605d0f7f1dd5ed7b8,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9cf69b5d2fd7ddcfead1a23901c0b2b4d04aebad77094f1aeb150e1ad77bb52],Size_:425881625,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=25142bad-9a64-4d5c-819d-89dcd3c1bea0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.859662763Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" id=7a8f653a-a4f6-4ce2-b400-d3668cf33e71 name=/runtime.v1.ImageService/PullImage
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.859782521Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a3b0b4bc900363a9a625dabdfef75797e6e041bbc1b58c2525e0ec296e0bb4f9,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a16832e9c1f864f0f8455237987cb75061483d55d4fd2619af2f93ac3563390d],Size_:435808699,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=acabab79-49a4-4a48-b941-b36595f72979 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.860437478Z" level=info msg="Creating container: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/cluster-policy-controller" id=9e9778de-cfd2-4e2a-ac41-79f4a70cb3dc name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.860497021Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.860590732Z" level=info msg="Creating container: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-readyz" id=81c00033-5406-42d1-be37-057bb13840dc name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.860628925Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:15:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:15:59.910197044Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" id=f3b48eae-59bc-4023-b647-994e8f5b2f62 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:00.346137136Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bd1c97d64b3986aa42fdf4f53165ad0cdaea72e442eb7ba2b2648fc8fa0514a7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018],Size_:435728296,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=60d8e7d6-a21b-4df4-be1a-9b406ffc0829 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:00.347036042Z" level=info msg="Creating container: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-cert-syncer" id=01c1dfd8-9313-4cb8-8f7d-5de548cfdb7f name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:00.347127647Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:00.989507325Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:390461144476b07e6d7f36c57822b21d673f6f11cfcf572e7eeb14d3898da2c5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651],Size_:429353565,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f3b48eae-59bc-4023-b647-994e8f5b2f62 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:00.990252537Z" level=info msg="Creating container: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler-cert-syncer" id=a3f236d7-5edd-47bf-a00b-86e53f8d1359 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:00.990326146Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:01 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00091|connmgr|INFO|br-int<->unix#2: 1807 flow_mods in the 1 s starting 10 s ago (1801 adds, 6 deletes)
Jan 23 16:16:02 hub-master-0.workload.bos2.lab sshd[12818]: Accepted publickey for core from 2600:52:7:18::11 port 48246 ssh2: ED25519 SHA256:51RsaYMAVDXjZ4ofvNlClwmCDL0eebyMyw8HOKcupS0
Jan 23 16:16:02 hub-master-0.workload.bos2.lab systemd[1]: Created slice User Slice of UID 1000.
-- Subject: Unit user-1000.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-1000.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 16:16:02 hub-master-0.workload.bos2.lab systemd[1]: Starting User runtime directory /run/user/1000...
-- Subject: Unit user-runtime-dir@1000.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-runtime-dir@1000.service has begun starting up.
Jan 23 16:16:02 hub-master-0.workload.bos2.lab systemd-logind[3052]: New session 1 of user core.
-- Subject: A new session 1 has been created for user core
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 1 has been created for the user core.
--
-- The leading process of the session is 12818.
Jan 23 16:16:02 hub-master-0.workload.bos2.lab systemd[1]: Started User runtime directory /run/user/1000.
-- Subject: Unit user-runtime-dir@1000.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-runtime-dir@1000.service has finished starting up.
--
-- The start-up result is done.
Jan 23 16:16:02 hub-master-0.workload.bos2.lab systemd[1]: Starting User Manager for UID 1000...
-- Subject: Unit user@1000.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user@1000.service has begun starting up.
Jan 23 16:16:02 hub-master-0.workload.bos2.lab systemd[12877]: pam_unix(systemd-user:session): session opened for user core by (uid=0)
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: /usr/lib/systemd/user/podman-kube@.service:10: Failed to parse service restart specifier, ignoring: never
Jan 23 16:16:03 hub-master-0.workload.bos2.lab sshd[12818]: pam_unix(sshd:session): session opened for user core by (uid=0)
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Starting Create User's Volatile Files and Directories...
-- Subject: Unit UNIT has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has begun starting up.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Listening on GnuPG network certificate management daemon.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Jan 23 16:16:04 hub-master-0.workload.bos2.lab podman[12910]: time="2023-01-23T16:16:03Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Listening on GnuPG cryptographic agent and passphrase cache.
Jan 23 16:16:04 hub-master-0.workload.bos2.lab podman[12911]: time="2023-01-23T16:16:03Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 16:16:04 hub-master-0.workload.bos2.lab podman[13000]: time="2023-01-23T16:16:03Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Jan 23 16:16:04 hub-master-0.workload.bos2.lab podman[12998]: time="2023-01-23T16:16:03Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Starting D-Bus User Message Bus Socket.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Started Podman auto-update timer.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Listening on Podman API Socket.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Reached target Paths.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Created slice podman\x2dkube.slice.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Reached target Timers.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Started Create User's Volatile Files and Directories.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Listening on D-Bus User Message Bus Socket.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Reached target Sockets.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Reached target Basic System.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[1]: Started User Manager for UID 1000.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Starting Podman auto-update service...
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Starting Podman API Service...
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[1]: Started Session 1 of user core.
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Starting Podman Start All Containers With Restart Policy Set To Always...
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Starting A template for running K8s workloads via podman-play-kube...
Jan 23 16:16:03 hub-master-0.workload.bos2.lab systemd[12877]: Started Podman API Service.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.374314715Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=286efd14-3c5c-4c7a-9b35-f24bb1d633bf name=/runtime.v1.ImageService/PullImage Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.375183684Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=5a6d3da4-b24a-42cb-bdfb-744e7bcd7100 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.378633904Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee" id=91bccca7-b791-4aa6-b09f-a7b09cba2f18 name=/runtime.v1.ImageService/PullImage Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.378992678Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:83e896edec1eb2ff032e4ed82d4a23af252e0046b6d1d040a619f9502fdff2df,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6],Size_:406136803,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0d2ea3c6-c43f-40b5-83bb-5e9aad04004f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.379231715Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee" id=a717dffe-c550-4859-83d9-4236799942fd name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.382165682Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:83e896edec1eb2ff032e4ed82d4a23af252e0046b6d1d040a619f9502fdff2df,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6],Size_:406136803,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5a6d3da4-b24a-42cb-bdfb-744e7bcd7100 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.382903708Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-master-fld8m/kube-rbac-proxy" id=c75405a9-f040-44a4-b45b-402e7f396a1b name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.382975209Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0.scope. -- Subject: Unit crio-conmon-9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.400505962Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/kube-rbac-proxy" id=fe744a6d-4f2a-4e85-8cb0-8d830389b98a name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.400682370Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.418950009Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6bfa52d7e9b640be60b053583a2cfb52e78c3cc029d71ccb418be610665cfd29,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee],Size_:352581871,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a717dffe-c550-4859-83d9-4236799942fd name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.419540371Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-7ks6h/bond-cni-plugin" id=75408b09-db5a-42e7-baef-13dfdc087fa1 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.419613279Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9.scope. -- Subject: Unit crio-conmon-cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0. -- Subject: Unit crio-9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab podman[12997]: Error: open default: no such file or directory Jan 23 16:16:05 hub-master-0.workload.bos2.lab podman[13000]: time="2023-01-23T16:16:05Z" level=info msg="Setting parallel job count to 337" Jan 23 16:16:05 hub-master-0.workload.bos2.lab podman[13000]: time="2023-01-23T16:16:05Z" level=info msg="Using systemd socket activation to determine API endpoint" Jan 23 16:16:05 hub-master-0.workload.bos2.lab podman[13000]: time="2023-01-23T16:16:05Z" level=info msg="API service listening on \"@0006e\". URI: \"@0006e\"" Jan 23 16:16:05 hub-master-0.workload.bos2.lab podman[13000]: time="2023-01-23T16:16:05Z" level=info msg="API service listening on \"@0006e\"" Jan 23 16:16:05 hub-master-0.workload.bos2.lab podman[12998]: time="2023-01-23T16:16:05Z" level=info msg="Setting parallel job count to 337" Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7.scope. 
-- Subject: Unit crio-conmon-8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Started D-Bus User Message Bus. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab podman[13000]: Error: failed to start API service: accept unixgram @0006e: accept4: operation not supported Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9. -- Subject: Unit crio-cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7. -- Subject: Unit crio-8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-084ccb5d0b7322656d161117ce90b74fd6b853364f35f801f815111d743ad59e.scope. -- Subject: Unit crio-conmon-084ccb5d0b7322656d161117ce90b74fd6b853364f35f801f815111d743ad59e.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-084ccb5d0b7322656d161117ce90b74fd6b853364f35f801f815111d743ad59e.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-e6480f05a30bf0e4297ba87b75c9859d49fd3732923c1716d7919a71ad01e533.scope. -- Subject: Unit crio-conmon-e6480f05a30bf0e4297ba87b75c9859d49fd3732923c1716d7919a71ad01e533.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-e6480f05a30bf0e4297ba87b75c9859d49fd3732923c1716d7919a71ad01e533.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-41cfdc995b6a52bef9020523a2947c28707a431f40acb9bb55e31046f9e328eb.scope. -- Subject: Unit crio-conmon-41cfdc995b6a52bef9020523a2947c28707a431f40acb9bb55e31046f9e328eb.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-41cfdc995b6a52bef9020523a2947c28707a431f40acb9bb55e31046f9e328eb.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Created slice user.slice.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Started podman-pause-b1294402.scope.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-2749ec49dd13105d444a5e11dd0e584f5ca8203ab133c192010168fce4f2e72e.scope.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Started podman-pause-f18d2838.scope.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: podman-kube@default.service: Main process exited, code=exited, status=125/n/a
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: podman-kube@default.service: Failed with result 'exit-code'.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Failed to start A template for running K8s workloads via podman-play-kube.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: podman.service: Main process exited, code=exited, status=125/n/a
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: podman.service: Failed with result 'exit-code'.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.scope.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 41cfdc995b6a52bef9020523a2947c28707a431f40acb9bb55e31046f9e328eb.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 084ccb5d0b7322656d161117ce90b74fd6b853364f35f801f815111d743ad59e.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container e6480f05a30bf0e4297ba87b75c9859d49fd3732923c1716d7919a71ad01e533.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 2749ec49dd13105d444a5e11dd0e584f5ca8203ab133c192010168fce4f2e72e.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.566087567Z" level=info msg="Created container 9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0: openshift-monitoring/node-exporter-pbh26/kube-rbac-proxy" id=5df1218a-43ab-475c-95d0-b1f9a624b60e name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.566650590Z" level=info msg="Starting container: 9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0" id=3ad8fa40-6502-487b-85b0-fa8f0be351ba name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.573457284Z" level=info msg="Started container" PID=13155 containerID=9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0 description=openshift-monitoring/node-exporter-pbh26/kube-rbac-proxy id=3ad8fa40-6502-487b-85b0-fa8f0be351ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4376b9e0340b1b255c30c0cd7e1eca321fd1edc94cf24b4db89a98ab24c43f9
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.577696773Z" level=info msg="Created container cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/cluster-policy-controller" id=9e9778de-cfd2-4e2a-ac41-79f4a70cb3dc name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.577982231Z" level=info msg="Starting container: cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9" id=1a251a26-f985-465a-868f-4bd380ca531d name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.583867332Z" level=info msg="Started container" PID=13197 containerID=cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9 description=openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/cluster-policy-controller id=1a251a26-f985-465a-868f-4bd380ca531d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.590865985Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" id=8f1f83d5-dc22-4707-bd35-b7bccc1ea081 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.591026861Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18 not found" id=8f1f83d5-dc22-4707-bd35-b7bccc1ea081 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.591337037Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" id=c17616de-d64c-4e1c-871f-8aea9788646b name=/runtime.v1.ImageService/PullImage
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.593756699Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18\""
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.604095567Z" level=info msg="Created container 8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7: openshift-ovn-kubernetes/ovnkube-node-897lw/kube-rbac-proxy" id=fe744a6d-4f2a-4e85-8cb0-8d830389b98a name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.604531555Z" level=info msg="Starting container: 8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7" id=c1046ba5-dca3-4096-8098-bb47000a35f7 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.607152188Z" level=info msg="Created container 084ccb5d0b7322656d161117ce90b74fd6b853364f35f801f815111d743ad59e: openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-readyz" id=81c00033-5406-42d1-be37-057bb13840dc name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.607534027Z" level=info msg="Starting container: 084ccb5d0b7322656d161117ce90b74fd6b853364f35f801f815111d743ad59e" id=50538c0c-f702-43ac-8d00-2ad5d4ac1213 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Started podman-pause-cf0e7ce8.scope.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Started Podman Start All Containers With Restart Policy Set To Always.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.615671653Z" level=info msg="Started container" PID=13286 containerID=084ccb5d0b7322656d161117ce90b74fd6b853364f35f801f815111d743ad59e description=openshift-etcd/etcd-hub-master-0.workload.bos2.lab/etcd-readyz id=50538c0c-f702-43ac-8d00-2ad5d4ac1213 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90651536f7b14ac7243d3410e9d48b14d1ddfe8c55c6041cc414a99a79f663ea
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Started podman-pause-d01ba5fc.scope.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.624942339Z" level=info msg="Started container" PID=13208 containerID=8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7 description=openshift-ovn-kubernetes/ovnkube-node-897lw/kube-rbac-proxy id=c1046ba5-dca3-4096-8098-bb47000a35f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.632170575Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=11da8ce4-f39e-4c55-ac2d-163058cde26a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.632340787Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:83e896edec1eb2ff032e4ed82d4a23af252e0046b6d1d040a619f9502fdff2df,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6],Size_:406136803,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=11da8ce4-f39e-4c55-ac2d-163058cde26a name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.632968982Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" id=c2c0cde0-24da-41f1-96ff-7f08b15e8a05 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.633109614Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:83e896edec1eb2ff032e4ed82d4a23af252e0046b6d1d040a619f9502fdff2df,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6],Size_:406136803,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c2c0cde0-24da-41f1-96ff-7f08b15e8a05 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.633780281Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/kube-rbac-proxy-ovn-metrics" id=bceec042-8c26-40bb-bc5c-a12e6d226717 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.633874759Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1.scope. -- Subject: Unit crio-conmon-1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.650872417Z" level=info msg="Created container 41cfdc995b6a52bef9020523a2947c28707a431f40acb9bb55e31046f9e328eb: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler-cert-syncer" id=a3f236d7-5edd-47bf-a00b-86e53f8d1359 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.651297451Z" level=info msg="Starting container: 41cfdc995b6a52bef9020523a2947c28707a431f40acb9bb55e31046f9e328eb" id=a657c3e6-8198-439b-9ad5-08e8191d9202 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: client(id:machine-config-operator dbus:1.430 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0) added; new total=1 Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.669572784Z" level=info msg="Started container" PID=13303 containerID=41cfdc995b6a52bef9020523a2947c28707a431f40acb9bb55e31046f9e328eb description=openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler-cert-syncer id=a657c3e6-8198-439b-9ad5-08e8191d9202 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6 Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1. -- Subject: Unit crio-1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: Locked sysroot Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: Initiated txn Cleanup for client(id:machine-config-operator dbus:1.430 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0): /org/projectatomic/rpmostree1/rhcos Jan 23 16:16:05 hub-master-0.workload.bos2.lab kernel: EXT4-fs (sda3): re-mounted. 
Opts: Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.678176068Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" id=5341960d-6a01-40f5-8672-7dd6089b5277 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.678335422Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:390461144476b07e6d7f36c57822b21d673f6f11cfcf572e7eeb14d3898da2c5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651],Size_:429353565,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5341960d-6a01-40f5-8672-7dd6089b5277 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.678827689Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" id=1217bead-0f95-41dc-84c0-b57fb4367864 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.678961204Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:390461144476b07e6d7f36c57822b21d673f6f11cfcf572e7eeb14d3898da2c5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651],Size_:429353565,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1217bead-0f95-41dc-84c0-b57fb4367864 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.679520514Z" level=info msg="Creating container: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler-recovery-controller" id=a06bea64-babc-4cc6-8b5d-f4e09a33d4cd name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.679594991Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: Process [pid: 13470 uid: 0 unit: crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope] connected to transaction progress Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: Txn Cleanup on /org/projectatomic/rpmostree1/rhcos successful Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: Unlocked sysroot Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: Process [pid: 13470 uid: 0 unit: crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope] disconnected from transaction progress Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: client(id:machine-config-operator dbus:1.430 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0) vanished; remaining=0 Jan 23 16:16:05 hub-master-0.workload.bos2.lab rpm-ostree[11274]: In idle state; will auto-exit in 64 seconds Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.689990260Z" level=info msg="Created container e6480f05a30bf0e4297ba87b75c9859d49fd3732923c1716d7919a71ad01e533: 
openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-cert-syncer" id=01c1dfd8-9313-4cb8-8f7d-5de548cfdb7f name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.690428583Z" level=info msg="Starting container: e6480f05a30bf0e4297ba87b75c9859d49fd3732923c1716d7919a71ad01e533" id=7dd58e59-9d30-4ea1-967f-302179f8d8c9 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.701095085Z" level=info msg="Created container 33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633: openshift-multus/multus-additional-cni-plugins-7ks6h/bond-cni-plugin" id=75408b09-db5a-42e7-baef-13dfdc087fa1 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.701524908Z" level=info msg="Starting container: 33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633" id=8d84369a-4ffe-4dd7-9c4e-38976087d792 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.705365538Z" level=info msg="Created container 2749ec49dd13105d444a5e11dd0e584f5ca8203ab133c192010168fce4f2e72e: openshift-ovn-kubernetes/ovnkube-master-fld8m/kube-rbac-proxy" id=c75405a9-f040-44a4-b45b-402e7f396a1b name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.705876826Z" level=info msg="Starting container: 2749ec49dd13105d444a5e11dd0e584f5ca8203ab133c192010168fce4f2e72e" id=75d0c06e-2a43-4804-a1f8-bad2a849f33f name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.707971564Z" level=info msg="Started container" PID=13321 containerID=33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633 description=openshift-multus/multus-additional-cni-plugins-7ks6h/bond-cni-plugin id=8d84369a-4ffe-4dd7-9c4e-38976087d792 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.708822530Z" level=info msg="Started container" PID=13297 containerID=e6480f05a30bf0e4297ba87b75c9859d49fd3732923c1716d7919a71ad01e533 description=openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-cert-syncer id=7dd58e59-9d30-4ea1-967f-302179f8d8c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3 Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.713574135Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_31575611-f7cb-4367-ab4b-419fef8460e8\"" Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.725404411Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.725450493Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.725469490Z" level=info msg="CNI monitoring event CREATE 
\"/var/lib/cni/bin/bond\"" Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.725926881Z" level=info msg="Started container" PID=13292 containerID=2749ec49dd13105d444a5e11dd0e584f5ca8203ab133c192010168fce4f2e72e description=openshift-ovn-kubernetes/ovnkube-master-fld8m/kube-rbac-proxy id=75d0c06e-2a43-4804-a1f8-bad2a849f33f name=/runtime.v1.RuntimeService/StartContainer sandboxID=f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7 Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.729240367Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=4592e74c-42c3-4c99-8a22-d156b3c34301 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.729418482Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bd1c97d64b3986aa42fdf4f53165ad0cdaea72e442eb7ba2b2648fc8fa0514a7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018],Size_:435728296,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4592e74c-42c3-4c99-8a22-d156b3c34301 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.736635077Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.736657134Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.736670425Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_31575611-f7cb-4367-ab4b-419fef8460e8\"" Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.scope has successfully entered the 'dead' state. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.scope: Consumed 34ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.scope completed and consumed the indicated resources. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: crio-33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.scope has successfully entered the 'dead' state. 
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: crio-33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633.scope: Consumed 35ms CPU time
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.745355593Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=efce2310-1208-4070-874e-8a2c84197e9d name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.745520132Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=efce2310-1208-4070-874e-8a2c84197e9d name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.746058813Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=b8f4ed64-5419-4884-b42e-6a983fb4f859 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.746257725Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b8f4ed64-5419-4884-b42e-6a983fb4f859 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.748223120Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-master-fld8m/sbdb" id=fce083f1-b1dc-478d-9990-f214febf0422 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.748325606Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.750703781Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=d45afb21-f122-41f1-a54e-8179f64b89ec name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.752699332Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bd1c97d64b3986aa42fdf4f53165ad0cdaea72e442eb7ba2b2648fc8fa0514a7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018],Size_:435728296,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d45afb21-f122-41f1-a54e-8179f64b89ec name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.753191667Z" level=info msg="Creating container: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-cert-regeneration-controller" id=5932c077-cfc5-4990-b022-5b1715ad34e5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.753275071Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-4199c8a76b7a0a5ac863fbc2e2bc5c3a9ebffd41599c8dd72df6bf00db9abcf4.scope.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 4199c8a76b7a0a5ac863fbc2e2bc5c3a9ebffd41599c8dd72df6bf00db9abcf4.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Started Podman auto-update service.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Reached target Default.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[12877]: Startup finished in 2.788s.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-950ceaa055798f0636444420e506298b733a3de2dcdc6f61dba67fa596b0e417.scope.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 950ceaa055798f0636444420e506298b733a3de2dcdc6f61dba67fa596b0e417.
Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.808159918Z" level=info msg="Created container 1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1: openshift-ovn-kubernetes/ovnkube-node-897lw/kube-rbac-proxy-ovn-metrics" id=bceec042-8c26-40bb-bc5c-a12e6d226717 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.808534399Z" level=info msg="Starting container: 1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1" id=e63c54fa-4d8c-4e6c-8a35-58c8f1b43c21 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.814750215Z" level=info msg="Started container" PID=13556 containerID=1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1 description=openshift-ovn-kubernetes/ovnkube-node-897lw/kube-rbac-proxy-ovn-metrics id=e63c54fa-4d8c-4e6c-8a35-58c8f1b43c21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f.scope. -- Subject: Unit crio-conmon-85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.832786830Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=17219a1c-7c84-42d9-82a3-69d688e43868 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.832981003Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=17219a1c-7c84-42d9-82a3-69d688e43868 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.838382212Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=d7eecd44-9405-4f4e-9344-fdbc75731466 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.838499810Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d7eecd44-9405-4f4e-9344-fdbc75731466 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:16:05 hub-master-0.workload.bos2.lab 
systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f. -- Subject: Unit crio-85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.863508363Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18\"" Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.874680334Z" level=info msg="Created container 4199c8a76b7a0a5ac863fbc2e2bc5c3a9ebffd41599c8dd72df6bf00db9abcf4: openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler-recovery-controller" id=a06bea64-babc-4cc6-8b5d-f4e09a33d4cd name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.875067047Z" level=info msg="Starting container: 4199c8a76b7a0a5ac863fbc2e2bc5c3a9ebffd41599c8dd72df6bf00db9abcf4" id=1fee1a93-f919-45d1-97ab-17b99c99fd16 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.882799857Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=8895b4c6-66b2-49bd-a9be-ff55e67fac98 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.882878745Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.894263335Z" level=info msg="Started container" PID=13698 containerID=4199c8a76b7a0a5ac863fbc2e2bc5c3a9ebffd41599c8dd72df6bf00db9abcf4 description=openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab/kube-scheduler-recovery-controller id=1fee1a93-f919-45d1-97ab-17b99c99fd16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48ef7c5bfb260a60ea1a7924be2a5e6dd11739bd08faf31b4b56316126ad91b6 Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b.scope. -- Subject: Unit crio-conmon-bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:16:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b. 
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.920422667Z" level=info msg="Created container 950ceaa055798f0636444420e506298b733a3de2dcdc6f61dba67fa596b0e417: openshift-ovn-kubernetes/ovnkube-master-fld8m/sbdb" id=fce083f1-b1dc-478d-9990-f214febf0422 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.920858715Z" level=info msg="Starting container: 950ceaa055798f0636444420e506298b733a3de2dcdc6f61dba67fa596b0e417" id=4ec91bb8-5268-4aed-9153-2feaa3439b6a name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.927451709Z" level=info msg="Started container" PID=13749 containerID=950ceaa055798f0636444420e506298b733a3de2dcdc6f61dba67fa596b0e417 description=openshift-ovn-kubernetes/ovnkube-master-fld8m/sbdb id=4ec91bb8-5268-4aed-9153-2feaa3439b6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.965006933Z" level=info msg="Created container 85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-cert-regeneration-controller" id=5932c077-cfc5-4990-b022-5b1715ad34e5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.965526081Z" level=info msg="Starting container: 85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f" id=9c7b156f-f27e-409c-923c-a0dd38dbe687 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.972315779Z" level=info msg="Started container" PID=13786 containerID=85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f description=openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-cert-regeneration-controller id=9c7b156f-f27e-409c-923c-a0dd38dbe687 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.978443035Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=bafc15af-70ec-4f92-8429-f27e7fed202c name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.978608489Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bd1c97d64b3986aa42fdf4f53165ad0cdaea72e442eb7ba2b2648fc8fa0514a7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018],Size_:435728296,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bafc15af-70ec-4f92-8429-f27e7fed202c name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.979220515Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=35ffc040-5bc6-4201-a40b-9a5c1ccd5bd0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.979303142Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bd1c97d64b3986aa42fdf4f53165ad0cdaea72e442eb7ba2b2648fc8fa0514a7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018],Size_:435728296,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=35ffc040-5bc6-4201-a40b-9a5c1ccd5bd0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.980176858Z" level=info msg="Creating container: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-insecure-readyz" id=c5133d1a-2c61-4135-9e84-7416ea1e6998 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.980273741Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.984889409Z" level=info msg="Created container bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=8895b4c6-66b2-49bd-a9be-ff55e67fac98 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.985238155Z" level=info msg="Starting container: bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b" id=a4dc1ea3-4575-4912-b0ab-f486c4fd129f name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.993257091Z" level=info msg="Started container" PID=13860 containerID=bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=a4dc1ea3-4575-4912-b0ab-f486c4fd129f name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:05.995014617Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.006027140Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.006052132Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.006063420Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.015125479Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.015145911Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.015159025Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.024196401Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.024233675Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.024250573Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-8558c25ed3ca9ddc9b8ab37cfd543bcb22e4776ef3d22650d7b49b92ad7af7b1.scope.
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.034959157Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.034988206Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.035002415Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.044223747Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.044249542Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 8558c25ed3ca9ddc9b8ab37cfd543bcb22e4776ef3d22650d7b49b92ad7af7b1.
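Each "CNI monitoring event WRITE" entry above is CRI-O's file watcher firing as /var/lib/cni/bin/ovn-k8s-cni-overlay is rewritten; after every event CRI-O re-scans the CNI config directory and re-selects the default network, which is why the same "Found CNI network multus-cni-network" / "Updated default CNI network name" pair repeats. The selection follows the usual CNI convention of taking the lexically first parseable config; a rough sketch of that rule (directory path from the log, logic illustrative rather than CRI-O's actual code):

    import json, os

    def default_cni_network(confdir="/etc/kubernetes/cni/net.d"):
        # CNI convention: config files are considered in lexical order and
        # the first one that parses becomes the default network, which is
        # why a name like 00-multus.conf reliably wins.
        for name in sorted(os.listdir(confdir)):
            if not name.endswith((".conf", ".conflist", ".json")):
                continue
            try:
                with open(os.path.join(confdir, name)) as f:
                    conf = json.load(f)
            except (OSError, ValueError):
                continue
            plugins = conf.get("plugins", [conf])
            return conf.get("name"), plugins[0].get("type"), name
        return None

    print(default_cni_network())  # e.g. ('multus-cni-network', 'multus', '00-multus.conf')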
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.118277 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" event=&{ID:77321459d336b7d15305c9b9a83e4081 Type:ContainerStarted Data:4199c8a76b7a0a5ac863fbc2e2bc5c3a9ebffd41599c8dd72df6bf00db9abcf4}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.118481 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" event=&{ID:77321459d336b7d15305c9b9a83e4081 Type:ContainerStarted Data:41cfdc995b6a52bef9020523a2947c28707a431f40acb9bb55e31046f9e328eb}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.118492 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.119516 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-pbh26" event=&{ID:ff6a907c-8dc5-4524-b928-d97ba7b430c3 Type:ContainerStarted Data:9899141a4f1d2efcbd1b5d086a32897df8be9d093be06a75605a8773695d1ac0}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.121553 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" event=&{ID:9552ff413d8390655360ce968177c622 Type:ContainerStarted Data:85f7daec446df71714123ba87bc0d5ffbf9b97c6143f18beac7d3b30223afc3f}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.121569 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" event=&{ID:9552ff413d8390655360ce968177c622 Type:ContainerStarted Data:e6480f05a30bf0e4297ba87b75c9859d49fd3732923c1716d7919a71ad01e533}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.122907 8631 generic.go:296] "Generic (PLEG): container finished" podID=94cb9be9-32f4-413c-9fdf-a6e9307ff410 containerID="33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633" exitCode=0
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.122931 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" event=&{ID:94cb9be9-32f4-413c-9fdf-a6e9307ff410 Type:ContainerDied Data:33ad090396010c8e6f617d54da458f0a40525be42c618d45fff4149a66b91633}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.123336036Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01" id=a72aef0f-671a-40b5-a856-56111f83f32e name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.123527367Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01 not found" id=a72aef0f-671a-40b5-a856-56111f83f32e name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.123965131Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01" id=d9886305-a042-433b-a556-e447885f2562 name=/runtime.v1.ImageService/PullImage
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.124872783Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01\""
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.125815 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" event=&{ID:38eebeadc7ddc4d42d1de9a5e4ac69f1 Type:ContainerStarted Data:084ccb5d0b7322656d161117ce90b74fd6b853364f35f801f815111d743ad59e}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.126835 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" event=&{ID:b8e918bfaafef0fc7d13026942c43171 Type:ContainerStarted Data:cb1dd76f359af413893197d33fd89e59a8b8668670bbcf92a790be87188657e9}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.128120 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.128136 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:1a78961b6c09e0ba4dc035759890bfd30243df847ce3c504adfc22ac838c3bc1}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.128144 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:8f355a72bf8b8a29da129e36992ecb11bd179d3cc46c9daa7830e5c48ac33fe7}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.128313 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.130059 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" event=&{ID:a88a1018-cc7c-4bd1-b3d2-0d960b53459c Type:ContainerStarted Data:950ceaa055798f0636444420e506298b733a3de2dcdc6f61dba67fa596b0e417}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.130080 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" event=&{ID:a88a1018-cc7c-4bd1-b3d2-0d960b53459c Type:ContainerStarted Data:2749ec49dd13105d444a5e11dd0e584f5ca8203ab133c192010168fce4f2e72e}
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.137682644Z" level=info msg="Created container 8558c25ed3ca9ddc9b8ab37cfd543bcb22e4776ef3d22650d7b49b92ad7af7b1: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-insecure-readyz" id=c5133d1a-2c61-4135-9e84-7416ea1e6998 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.138061815Z" level=info msg="Starting container: 8558c25ed3ca9ddc9b8ab37cfd543bcb22e4776ef3d22650d7b49b92ad7af7b1" id=db997ca6-e49b-4fed-831f-55281742dd70 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.145101505Z" level=info msg="Started container" PID=14093 containerID=8558c25ed3ca9ddc9b8ab37cfd543bcb22e4776ef3d22650d7b49b92ad7af7b1 description=openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-insecure-readyz id=db997ca6-e49b-4fed-831f-55281742dd70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.152165199Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=9dc40680-810d-4cf3-a7a1-47fcf34695a2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.152341587Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bd1c97d64b3986aa42fdf4f53165ad0cdaea72e442eb7ba2b2648fc8fa0514a7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018],Size_:435728296,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9dc40680-810d-4cf3-a7a1-47fcf34695a2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.152908539Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" id=533fdf95-6828-4ae5-a550-fedefa5735f7 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.153036457Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bd1c97d64b3986aa42fdf4f53165ad0cdaea72e442eb7ba2b2648fc8fa0514a7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018],Size_:435728296,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=533fdf95-6828-4ae5-a550-fedefa5735f7 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.153720391Z" level=info msg="Creating container: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-check-endpoints" id=38c89374-3238-42dd-9542-47f758b5e014 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.153793293Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab rpm-ostree[11274]: client(id:machine-config-operator dbus:1.443 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0) added; new total=1
Jan 23 16:16:06 hub-master-0.workload.bos2.lab rpm-ostree[11274]: client(id:machine-config-operator dbus:1.443 unit:crio-166c5c4767dc0d2ca6ea68ea916fbb84ed79ff1b53d26c94a781a9e093737da0.scope uid:0) vanished; remaining=0
Jan 23 16:16:06 hub-master-0.workload.bos2.lab rpm-ostree[11274]: In idle state; will auto-exit in 62 seconds
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-ecdd90c10cb51f2e579421f2a6cdd4d680957f1066f6215c51a085a4db01948f.scope.
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container ecdd90c10cb51f2e579421f2a6cdd4d680957f1066f6215c51a085a4db01948f.
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.325677260Z" level=info msg="Created container ecdd90c10cb51f2e579421f2a6cdd4d680957f1066f6215c51a085a4db01948f: openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-check-endpoints" id=38c89374-3238-42dd-9542-47f758b5e014 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.326139014Z" level=info msg="Starting container: ecdd90c10cb51f2e579421f2a6cdd4d680957f1066f6215c51a085a4db01948f" id=5c43fd90-8ac6-4c53-8d23-826b7cbb606c name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.344888827Z" level=info msg="Started container" PID=14180 containerID=ecdd90c10cb51f2e579421f2a6cdd4d680957f1066f6215c51a085a4db01948f description=openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab/kube-apiserver-check-endpoints id=5c43fd90-8ac6-4c53-8d23-826b7cbb606c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bfa4524a38aebe37bebba8d194ade3239b35942d342cafd0acd71dbd32455c3
Jan 23 16:16:06 hub-master-0.workload.bos2.lab root[14249]: machine-config-daemon[10176]: Validated on-disk state
Jan 23 16:16:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:06.384237245Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01\""
Jan 23 16:16:06 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490566.4011] device (ovn-k8s-mp0): carrier: link connected
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.446948 8631 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:06.470263 8631 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:16:06 hub-master-0.workload.bos2.lab conmon[13823]: conmon bd041c77ddf86d1eede8 : container 13860 exited with status 1
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: crio-bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b.scope: Succeeded.
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: crio-bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b.scope: Consumed 616ms CPU time
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b.scope: Succeeded.
Jan 23 16:16:06 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b.scope: Consumed 52ms CPU time
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.135527 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" event=&{ID:9552ff413d8390655360ce968177c622 Type:ContainerStarted Data:ecdd90c10cb51f2e579421f2a6cdd4d680957f1066f6215c51a085a4db01948f}
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.135812 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" event=&{ID:9552ff413d8390655360ce968177c622 Type:ContainerStarted Data:8558c25ed3ca9ddc9b8ab37cfd543bcb22e4776ef3d22650d7b49b92ad7af7b1}
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.135827 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.135837 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.136626 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/174.log"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.137170 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b" exitCode=1
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.137219 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b}
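The sequence just above - conmon reporting "container 13860 exited with status 1", the scope entering the dead state, a PLEG "container finished" with exitCode=1, and a ContainerDied event for the same ovnkube-node container (followed below by "RemoveContainer" and a fresh create) - is the signature of a container caught in a restart loop. One way to surface such loops is to fold the journal's PLEG events by pod; a sketch matching the event format visible in this log (reads journal text on stdin; not part of any shipped tooling):

    import re, sys
    from collections import Counter

    # Matches kubelet PLEG events exactly as they appear in this journal:
    #   ... "SyncLoop (PLEG): event for pod" pod="ns/name"
    #       event=&{ID:<uid> Type:ContainerDied Data:<container-id>}
    PLEG = re.compile(r'pod="([^"]+)" event=&\{ID:\S+ Type:(\w+) Data:[0-9a-f]+\}')

    deaths = Counter()
    for line in sys.stdin:  # e.g. journalctl --no-pager | python3 pleg_deaths.py
        m = PLEG.search(line)
        if m and m.group(2) == "ContainerDied":
            deaths[m.group(1)] += 1

    for pod, n in deaths.most_common():
        print("%4d ContainerDied %s" % (n, pod))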
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.137803 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.138297 8631 scope.go:115] "RemoveContainer" containerID="bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.141542300Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=56946329-c43c-4e02-900f-ecc19c08d04b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.141705613Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=56946329-c43c-4e02-900f-ecc19c08d04b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.141469 8631 patch_prober.go:29] interesting pod/kube-apiserver-hub-master-0.workload.bos2.lab container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]log ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]etcd ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]etcd-readiness ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [-]api-openshift-apiserver-available failed: reason withheld
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [-]api-openshift-oauth-apiserver-available failed: reason withheld
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]informer-sync ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/openshift.io-deprecated-api-requests-filter ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/priority-and-fairness-filter ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-apiextensions-informers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-apiextensions-controllers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/crd-informer-synced ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/bootstrap-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/rbac/bootstrap-roles ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-registration-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-status-available-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]autoregister-completion ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-openapi-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]shutdown ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: readyz check failed
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.141698 8631 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" podUID=9552ff413d8390655360ce968177c622 containerName="kube-apiserver" probeResult=failure output="HTTP probe failed with statuscode: 500"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.142312906Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=24f5ddb2-54fe-46b5-a712-dd6cf751c83c name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.142466524Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=24f5ddb2-54fe-46b5-a712-dd6cf751c83c name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.145324902Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=9c6fb4cb-ae5e-40fc-bbe7-49e2fe266a95 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.145413411Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.156516 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.284229 8631 patch_prober.go:29] interesting pod/kube-apiserver-hub-master-0.workload.bos2.lab container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]log ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]etcd ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]etcd-readiness ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [-]api-openshift-apiserver-available failed: reason withheld
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [-]api-openshift-oauth-apiserver-available failed: reason withheld
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]informer-sync ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/openshift.io-deprecated-api-requests-filter ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/priority-and-fairness-filter ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-apiextensions-informers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-apiextensions-controllers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/crd-informer-synced ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/bootstrap-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/rbac/bootstrap-roles ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-registration-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-status-available-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]autoregister-completion ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-openapi-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: [+]shutdown ok
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: readyz check failed
Jan 23 16:16:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:07.284282 8631 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" podUID=9552ff413d8390655360ce968177c622 containerName="kube-apiserver" probeResult=failure output="HTTP probe failed with statuscode: 500"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.318972773Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01" id=d9886305-a042-433b-a556-e447885f2562 name=/runtime.v1.ImageService/PullImage
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.319239537Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" id=c17616de-d64c-4e1c-871f-8aea9788646b name=/runtime.v1.ImageService/PullImage
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.319990828Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01" id=16efa43f-41c4-473a-b754-f9c22f2b0115 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.320062342Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" id=b61a7dc6-4ee4-440b-a3b9-863143e4b8e7 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.320822952Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e0216c844cca79631a684d908cd44fa6618c9184e1a451fb30013cfb23d51d16,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01],Size_:317087596,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=16efa43f-41c4-473a-b754-f9c22f2b0115 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.321008735Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13da15fe6e03257c9cc571b3a2f61d1a19b65a8cdbb785126b24e486ceec5084,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18],Size_:431716603,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b61a7dc6-4ee4-440b-a3b9-863143e4b8e7 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.321404330Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-7ks6h/routeoverride-cni" id=cadc6e5c-a46c-423d-bc7b-cbb20961b816 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.321472134Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.321504695Z" level=info msg="Creating container: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager-cert-syncer" id=538fb252-58a4-478c-917d-18df84ae3dbd name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.321564084Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6.scope.
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6.
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac.scope.
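Both readiness failures above show /readyz returning 500 with every local check ([+]ping, [+]etcd, all the poststarthooks) already ok and only the two aggregated-API checks, [-]api-openshift-apiserver-available and [-]api-openshift-oauth-apiserver-available, still failing: the kube-apiserver itself is up and is simply waiting for the openshift-apiserver and oauth-apiserver endpoints. The same verbose breakdown the probe sees can be fetched by hand; a minimal sketch, assuming the default secure port 6443 and that anonymous access to /readyz is allowed (certificate verification disabled only because this is a local diagnostic):

    import ssl
    import urllib.error
    import urllib.request

    # ?verbose makes /readyz return the per-check [+]/[-] list
    # quoted in the journal above.
    url = "https://localhost:6443/readyz?verbose"
    ctx = ssl._create_unverified_context()  # local diagnostic only

    try:
        with urllib.request.urlopen(url, context=ctx) as resp:
            print(resp.status)
            print(resp.read().decode())
    except urllib.error.HTTPError as err:  # 500 while any check fails
        print(err.code)
        print(err.read().decode())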
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-7d5a0c223bb9daf45591659bbaf58ae0a3837392d4d0ef5fe02019a2e4ae71c7.scope.
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac.
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 7d5a0c223bb9daf45591659bbaf58ae0a3837392d4d0ef5fe02019a2e4ae71c7.
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.430138370Z" level=info msg="Created container 4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=9c6fb4cb-ae5e-40fc-bbe7-49e2fe266a95 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.430617774Z" level=info msg="Starting container: 4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6" id=e24bfa6e-e933-4ee3-9927-e080d46a0c03 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.450527573Z" level=info msg="Started container" PID=14426 containerID=4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=e24bfa6e-e933-4ee3-9927-e080d46a0c03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.454768229Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.465031297Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.465056275Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.465071958Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.474369343Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.474391771Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.474403894Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.484089651Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.484109431Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.484121616Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.492645464Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.492680897Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.492703277Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.503053948Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.503071746Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.527310140Z" level=info msg="Created container 4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac: openshift-multus/multus-additional-cni-plugins-7ks6h/routeoverride-cni" id=cadc6e5c-a46c-423d-bc7b-cbb20961b816 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.527769388Z" level=info msg="Starting container: 4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac" id=35bf6dbf-2666-49ff-999e-27c8bef2eb83 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.543008572Z" level=info msg="Created container 7d5a0c223bb9daf45591659bbaf58ae0a3837392d4d0ef5fe02019a2e4ae71c7: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager-cert-syncer" id=538fb252-58a4-478c-917d-18df84ae3dbd name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.543498654Z" level=info msg="Starting container: 7d5a0c223bb9daf45591659bbaf58ae0a3837392d4d0ef5fe02019a2e4ae71c7" id=fc6773f7-5b0e-46e1-8601-205bb7ad490b name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.545669464Z" level=info msg="Started container" PID=14463 containerID=4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac description=openshift-multus/multus-additional-cni-plugins-7ks6h/routeoverride-cni id=35bf6dbf-2666-49ff-999e-27c8bef2eb83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.550592541Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_a1df0b84-e448-4b29-8f7c-5b616aebd77d\""
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.560730937Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.560759148Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.560775226Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/route-override\""
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.562088131Z" level=info msg="Started container" PID=14476 containerID=7d5a0c223bb9daf45591659bbaf58ae0a3837392d4d0ef5fe02019a2e4ae71c7 description=openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager-cert-syncer id=fc6773f7-5b0e-46e1-8601-205bb7ad490b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.584887604Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" id=4866febb-b7a2-4f6d-9588-575f06ce5a04 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.584966802Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.584987874Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.585001405Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_a1df0b84-e448-4b29-8f7c-5b616aebd77d\""
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.586075768Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13da15fe6e03257c9cc571b3a2f61d1a19b65a8cdbb785126b24e486ceec5084,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18],Size_:431716603,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4866febb-b7a2-4f6d-9588-575f06ce5a04 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: crio-4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac.scope: Succeeded.
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: crio-4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac.scope: Consumed 35ms CPU time
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.586607601Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" id=6b799c0e-06f3-4e36-a6c3-cc479c14e0e3 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac.scope: Succeeded.
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac.scope: Consumed 36ms CPU time
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.587916779Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13da15fe6e03257c9cc571b3a2f61d1a19b65a8cdbb785126b24e486ceec5084,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18],Size_:431716603,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6b799c0e-06f3-4e36-a6c3-cc479c14e0e3 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.588578215Z" level=info msg="Creating container: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager-recovery-controller" id=01ee83a5-0a92-4ca5-969b-be58abfe12e7 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.588646771Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-2dc0e0ecbd2bbcef87711e35e0d4aa63cca1481c97568db7ecb697917ea2bbb1.scope.
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 2dc0e0ecbd2bbcef87711e35e0d4aa63cca1481c97568db7ecb697917ea2bbb1.
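Every image reference in these entries is pinned by digest (repository@sha256:...), never by tag, so a given "Checking image status" always resolves to the same immutable image ID (here 13da15fe... for the 81a043e6... digest in both the 4866febb and 6b799c0e requests). When cross-referencing the pairs it can help to split a pinned reference into its parts; a small illustrative helper (the reference string is taken from the log):

    # Split a digest-pinned reference like those logged by CRI-O above.
    def split_pinned(ref):
        repo, _, digest = ref.partition("@")
        registry, _, path = repo.partition("/")
        return registry, path, digest

    ref = ("quay.io/openshift-release-dev/ocp-v4.0-art-dev"
           "@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18")
    print(split_pinned(ref))
    # ('quay.io', 'openshift-release-dev/ocp-v4.0-art-dev', 'sha256:81a043e6...')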
Jan 23 16:16:07 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.754956736Z" level=info msg="Created container 2dc0e0ecbd2bbcef87711e35e0d4aa63cca1481c97568db7ecb697917ea2bbb1: openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager-recovery-controller" id=01ee83a5-0a92-4ca5-969b-be58abfe12e7 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.755551488Z" level=info msg="Starting container: 2dc0e0ecbd2bbcef87711e35e0d4aa63cca1481c97568db7ecb697917ea2bbb1" id=24974c30-9ac3-45ef-9c2e-1a86a8d84e13 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:07.773566665Z" level=info msg="Started container" PID=14697 containerID=2dc0e0ecbd2bbcef87711e35e0d4aa63cca1481c97568db7ecb697917ea2bbb1 description=openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab/kube-controller-manager-recovery-controller id=24974c30-9ac3-45ef-9c2e-1a86a8d84e13 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f6fa4d1caf147a8114e65008795c0bbb1312199a6815b7e0b11e2d3c24761462
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.039584630Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=b22f0bd8-e1e6-45f1-881a-1a74eb94476c name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.039753104Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b22f0bd8-e1e6-45f1-881a-1a74eb94476c name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.040488593Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=39a277e4-eedd-43ef-a412-44c820e54132 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.040586499Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=39a277e4-eedd-43ef-a412-44c820e54132 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.041365910Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-master-fld8m/ovnkube-master" id=254b948b-8837-4224-83fc-43125f3f3957 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.041450286Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-6cc0866e62e5f0da09693fa4272140aa57ad09973db40ed26996caabf7082f08.scope.
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 6cc0866e62e5f0da09693fa4272140aa57ad09973db40ed26996caabf7082f08.
Jan 23 16:16:08 hub-master-0.workload.bos2.lab conmon[14414]: conmon 4857428dfee18a4920f2 : container 14426 exited with status 1
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: crio-4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6.scope: Succeeded.
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: crio-4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6.scope: Consumed 591ms CPU time
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6.scope: Succeeded.
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6.scope: Consumed 58ms CPU time
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.140339 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/175.log"
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.140950 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/174.log"
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.141733 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6" exitCode=1
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.141769 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6}
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.141789 8631 scope.go:115] "RemoveContainer" containerID="bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b"
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.142402233Z" level=info msg="Removing container: bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b" id=31c573fd-7d6a-41e3-b709-5cd8b75bbe40 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.142607 8631 scope.go:115] "RemoveContainer" containerID="4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6"
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:08.143145 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.143834 8631 generic.go:296] "Generic (PLEG): container finished" podID=94cb9be9-32f4-413c-9fdf-a6e9307ff410 containerID="4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac" exitCode=0
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.143897 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" event=&{ID:94cb9be9-32f4-413c-9fdf-a6e9307ff410 Type:ContainerDied Data:4a5c82d1728d75f74affe571f39f3b606a28057b927a2edab94ae4be54bc38ac}
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.144311548Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" id=fe3b99ac-c96c-461c-a56d-b43fa00d7833 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.144477345Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365 not found" id=fe3b99ac-c96c-461c-a56d-b43fa00d7833 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.144701595Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" id=f674e0fe-9266-4c9f-a910-27fee648bb7a name=/runtime.v1.ImageService/PullImage
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.145045 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" event=&{ID:b8e918bfaafef0fc7d13026942c43171 Type:ContainerStarted Data:2dc0e0ecbd2bbcef87711e35e0d4aa63cca1481c97568db7ecb697917ea2bbb1}
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.145073 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" event=&{ID:b8e918bfaafef0fc7d13026942c43171 Type:ContainerStarted Data:7d5a0c223bb9daf45591659bbaf58ae0a3837392d4d0ef5fe02019a2e4ae71c7}
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.145716632Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365\""
Jan 23 16:16:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:08.148340 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.183814561Z" level=info msg="Removed container bd041c77ddf86d1eede8f0306f53fbc43186b49b86d544a5e8e22a5b661f4f4b: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=31c573fd-7d6a-41e3-b709-5cd8b75bbe40 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.203990087Z" level=info msg="Created container 6cc0866e62e5f0da09693fa4272140aa57ad09973db40ed26996caabf7082f08: openshift-ovn-kubernetes/ovnkube-master-fld8m/ovnkube-master" id=254b948b-8837-4224-83fc-43125f3f3957 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.204482595Z" level=info msg="Starting container: 6cc0866e62e5f0da09693fa4272140aa57ad09973db40ed26996caabf7082f08" id=10f516fd-138e-4137-a214-57d002c38a51 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.210241643Z" level=info msg="Started container" PID=14840 containerID=6cc0866e62e5f0da09693fa4272140aa57ad09973db40ed26996caabf7082f08 description=openshift-ovn-kubernetes/ovnkube-master-fld8m/ovnkube-master id=10f516fd-138e-4137-a214-57d002c38a51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.228121463Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=d5833df8-57a0-44a4-adc6-009f17ffa8e1 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.228250866Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d5833df8-57a0-44a4-adc6-009f17ffa8e1 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.228762531Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=b6fd2636-3674-43b5-807b-ecebf6d87b2c name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.228906254Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b6fd2636-3674-43b5-807b-ecebf6d87b2c name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.229642280Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-master-fld8m/ovn-dbchecker" id=b6b65a6b-3603-4b39-8e6e-843bc1aeac3f name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.229712993Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-02f09c453630478cbc0057dbe083242760f7f83813cdeadda51bb38e3e7890f0.scope.
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 02f09c453630478cbc0057dbe083242760f7f83813cdeadda51bb38e3e7890f0.
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.400566523Z" level=info msg="Created container 02f09c453630478cbc0057dbe083242760f7f83813cdeadda51bb38e3e7890f0: openshift-ovn-kubernetes/ovnkube-master-fld8m/ovn-dbchecker" id=b6b65a6b-3603-4b39-8e6e-843bc1aeac3f name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.400968060Z" level=info msg="Starting container: 02f09c453630478cbc0057dbe083242760f7f83813cdeadda51bb38e3e7890f0" id=e2f3fb38-26a3-4039-932c-0eb300b2ff7e name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:08 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-b72aae94df09091b35970c5934128748da4a18f90487b371bc575353df261050-merged.mount: Succeeded.
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.407882656Z" level=info msg="Started container" PID=14927 containerID=02f09c453630478cbc0057dbe083242760f7f83813cdeadda51bb38e3e7890f0 description=openshift-ovn-kubernetes/ovnkube-master-fld8m/ovn-dbchecker id=e2f3fb38-26a3-4039-932c-0eb300b2ff7e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f170255b6d8f1c25c2b2389fa822c6245de4e17660dd9254d6d1558462f4fde7
Jan 23 16:16:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:08.413438847Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365\""
Jan 23 16:16:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:09.149578 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" event=&{ID:a88a1018-cc7c-4bd1-b3d2-0d960b53459c Type:ContainerStarted Data:02f09c453630478cbc0057dbe083242760f7f83813cdeadda51bb38e3e7890f0}
Jan 23 16:16:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:09.149601 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" event=&{ID:a88a1018-cc7c-4bd1-b3d2-0d960b53459c Type:ContainerStarted Data:6cc0866e62e5f0da09693fa4272140aa57ad09973db40ed26996caabf7082f08}
Jan 23 16:16:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:09.149745 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m"
Jan 23 16:16:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:09.151116 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/175.log"
Jan 23 16:16:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:09.153354 8631 scope.go:115] "RemoveContainer" containerID="4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6"
Jan 23 16:16:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:09.153826 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:16:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:10.153936 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m"
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.553098780Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" id=f674e0fe-9266-4c9f-a910-27fee648bb7a name=/runtime.v1.ImageService/PullImage
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.554012634Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" id=2a7a2c09-c87a-46ac-8e50-c78465f3292b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.554777300Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ac15f90c4d838b27568670bf9b0102c88856672018545301e99928b9123841b0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365],Size_:476488546,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2a7a2c09-c87a-46ac-8e50-c78465f3292b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.555268543Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-7ks6h/whereabouts-cni-bincopy" id=97785ae6-3610-49ea-91fe-7eb58b70bb86 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.555356971Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:10 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f.scope.
Jan 23 16:16:10 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f.
Jan 23 16:16:10 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:10.668524 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:16:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:10.669675 8631 scope.go:115] "RemoveContainer" containerID="4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6"
Jan 23 16:16:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:10.670187 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.717931471Z" level=info msg="Created container a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f: openshift-multus/multus-additional-cni-plugins-7ks6h/whereabouts-cni-bincopy" id=97785ae6-3610-49ea-91fe-7eb58b70bb86 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.718546496Z" level=info msg="Starting container: a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f" id=73843fc8-807c-493f-8b92-01553491a5f2 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.736303389Z" level=info msg="Started container" PID=15051 containerID=a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f description=openshift-multus/multus-additional-cni-plugins-7ks6h/whereabouts-cni-bincopy id=73843fc8-807c-493f-8b92-01553491a5f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.739864974Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_3fa49af1-ce68-4f32-ae2e-5cb3b4154c1a\""
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.750021315Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.750041688Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.759071567Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/whereabouts\""
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.768046404Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.768065498Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:10.768077087Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_3fa49af1-ce68-4f32-ae2e-5cb3b4154c1a\""
Jan 23 16:16:10 hub-master-0.workload.bos2.lab systemd[1]: crio-a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f.scope: Succeeded.
Jan 23 16:16:10 hub-master-0.workload.bos2.lab systemd[1]: crio-a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f.scope: Consumed 45ms CPU time
Jan 23 16:16:10 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f.scope: Succeeded.
Jan 23 16:16:10 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f.scope: Consumed 35ms CPU time
Jan 23 16:16:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:11.158276 8631 generic.go:296] "Generic (PLEG): container finished" podID=94cb9be9-32f4-413c-9fdf-a6e9307ff410 containerID="a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f" exitCode=0
Jan 23 16:16:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:11.158394 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" event=&{ID:94cb9be9-32f4-413c-9fdf-a6e9307ff410 Type:ContainerDied Data:a24efdda581e749d14cf8e6f6ecaac6ddd7f804060b03676b38e2560a263cb4f}
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.158925940Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" id=3e8d5bfb-3918-482d-9a6b-9572091840cb name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.159846619Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ac15f90c4d838b27568670bf9b0102c88856672018545301e99928b9123841b0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365],Size_:476488546,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3e8d5bfb-3918-482d-9a6b-9572091840cb name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.160357150Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" id=4c78d589-c44e-4738-910b-ae3a4020eef2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.161156009Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ac15f90c4d838b27568670bf9b0102c88856672018545301e99928b9123841b0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365],Size_:476488546,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4c78d589-c44e-4738-910b-ae3a4020eef2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.162146515Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-7ks6h/whereabouts-cni" id=2c263a12-bab7-4040-bff5-3e948c0914bd name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.162227559Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:11 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533.scope.
Jan 23 16:16:11 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533.
Jan 23 16:16:11 hub-master-0.workload.bos2.lab systemd[1]: Couldn't stat device /dev/char/10:200: No such file or directory
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.315368715Z" level=info msg="Created container 4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533: openshift-multus/multus-additional-cni-plugins-7ks6h/whereabouts-cni" id=2c263a12-bab7-4040-bff5-3e948c0914bd name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.315765155Z" level=info msg="Starting container: 4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533" id=eed3a908-8d4c-41ac-87ad-e5684e32c1f0 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:11.333463523Z" level=info msg="Started container" PID=15160 containerID=4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533 description=openshift-multus/multus-additional-cni-plugins-7ks6h/whereabouts-cni id=eed3a908-8d4c-41ac-87ad-e5684e32c1f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b
Jan 23 16:16:11 hub-master-0.workload.bos2.lab systemd[1]: crio-4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533.scope: Succeeded.
Jan 23 16:16:11 hub-master-0.workload.bos2.lab systemd[1]: crio-4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533.scope: Consumed 31ms CPU time
Jan 23 16:16:11 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533.scope: Succeeded.
Jan 23 16:16:11 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533.scope: Consumed 30ms CPU time
Jan 23 16:16:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:12.162171 8631 generic.go:296] "Generic (PLEG): container finished" podID=94cb9be9-32f4-413c-9fdf-a6e9307ff410 containerID="4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533" exitCode=0
Jan 23 16:16:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:12.162201 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" event=&{ID:94cb9be9-32f4-413c-9fdf-a6e9307ff410 Type:ContainerDied Data:4e35ddfcbeeedafe506359aca3e457cec239204d6695e32efd6cec4d8686a533}
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.162789214Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=574bf66e-0e82-4114-b09c-e142508a9a35 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.162968392Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=574bf66e-0e82-4114-b09c-e142508a9a35 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.163418004Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=5431315f-34f9-4724-808c-23a3e92ac6ac name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.163572104Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5431315f-34f9-4724-808c-23a3e92ac6ac name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.166628679Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-7ks6h/kube-multus-additional-cni-plugins" id=8809182b-46e3-4432-9c57-7384c5cfbc31 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.166721668Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:12 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-3790932704c85828ec0f76596fe5627824233f89ca262bbc33b37fcad70ab6a3.scope.
Jan 23 16:16:12 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 3790932704c85828ec0f76596fe5627824233f89ca262bbc33b37fcad70ab6a3.
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.315770964Z" level=info msg="Created container 3790932704c85828ec0f76596fe5627824233f89ca262bbc33b37fcad70ab6a3: openshift-multus/multus-additional-cni-plugins-7ks6h/kube-multus-additional-cni-plugins" id=8809182b-46e3-4432-9c57-7384c5cfbc31 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.316327891Z" level=info msg="Starting container: 3790932704c85828ec0f76596fe5627824233f89ca262bbc33b37fcad70ab6a3" id=d866b05d-0fa7-42a4-bc3f-060a3861e84f name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:12.336094437Z" level=info msg="Started container" PID=15300 containerID=3790932704c85828ec0f76596fe5627824233f89ca262bbc33b37fcad70ab6a3 description=openshift-multus/multus-additional-cni-plugins-7ks6h/kube-multus-additional-cni-plugins id=d866b05d-0fa7-42a4-bc3f-060a3861e84f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd22403af0998109c47ad84503ae9773463b1f4015fc84cdb9c548d8fe02ed7b
Jan 23 16:16:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:13.166877 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7ks6h" event=&{ID:94cb9be9-32f4-413c-9fdf-a6e9307ff410 Type:ContainerStarted Data:3790932704c85828ec0f76596fe5627824233f89ca262bbc33b37fcad70ab6a3}
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.206752146Z" level=info msg="NetworkStart: stopping network for sandbox a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.207246102Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/de00ae5b-2a59-4ecb-8575-5898873367d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.207273352Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.207280720Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.207287677Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.236097958Z" level=info msg="NetworkStart: stopping network for sandbox 0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.236229905Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/8e70d917-e8b5-492e-b5a2-c4744138f447 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.236254631Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.236264325Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.236271378Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.253587911Z" level=info msg="NetworkStart: stopping network for sandbox d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.253723400Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/a7a7cc4b-893e-4643-a4ed-f76ed127bae8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.253744359Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.253753005Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.253759600Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.278791659Z" level=info msg="NetworkStart: stopping network for sandbox da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.278897773Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/02773faa-a579-4c16-9438-c3e56af30922 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.278921782Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.278929007Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.278935624Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.288028123Z" level=info msg="NetworkStart: stopping network for sandbox 36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.288143714Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/c097c349-f4ba-4cf6-8497-d044db4d9cd8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.288167392Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.288174743Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.288181725Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.328648808Z" level=info msg="NetworkStart: stopping network for sandbox 1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.328754586Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/280f54b0-a390-4ad3-b63a-99c83abd7c76 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.328774021Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.328780719Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.328786622Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.368806942Z" level=info msg="NetworkStart: stopping network for sandbox 72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.368908387Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/27823c6e-462f-4dbc-898d-c1a8eb118472 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.368928929Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.368935376Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.368941390Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.377674759Z" level=info msg="NetworkStart: stopping network for sandbox e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.377792928Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d5e84b71-e25e-4483-af5e-644be83e25d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.377823348Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.377831312Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.377837666Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.387890021Z" level=info msg="NetworkStart: stopping network for sandbox 4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.388000465Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/a7e0c2fc-8546-4204-bc65-61a2a91b42a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.388019286Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.388025899Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.388031465Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.526335153Z" level=info msg="NetworkStart: stopping network for sandbox 40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.526471723Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/087347b4-6e39-42b1-8aba-1e49de29f9da Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.526497620Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.526506374Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.526513984Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.583885394Z" level=info msg="NetworkStart: stopping network for sandbox d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.583994250Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/8448acfc-4457-48bb-a909-a4c132ec8212 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.584013934Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.584021453Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.584029183Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.591815572Z" level=info msg="NetworkStart: stopping network for sandbox a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.591920480Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/e5601c10-aed1-4911-b3b4-1c538160a0ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.591939211Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.591945394Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:14.591950828Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:15.659384 8631 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:16:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:15.662337 8631 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:16:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:15.755652 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:16:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:16.172992 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:16:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:16.175767 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:16:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:17.183744 8631 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:16:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:17.186574 8631 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:16:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:18.180185 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab"
Jan 23 16:16:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:22.996531 8631 scope.go:115] "RemoveContainer" containerID="4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6"
Jan 23 16:16:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:22.997294285Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=df93f974-2091-439f-979c-26d33daa7a46 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:22.997431392Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=df93f974-2091-439f-979c-26d33daa7a46 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:22.997890885Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=b4a27689-429a-4530-beca-15b1fe69db8f name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:22.998052840Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b4a27689-429a-4530-beca-15b1fe69db8f name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:22.999016690Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=be78ab0b-f366-45e9-92d0-393500ee5128 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:22.999091136Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:23 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.scope.
Jan 23 16:16:23 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.115117609Z" level=info msg="Created container 391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=be78ab0b-f366-45e9-92d0-393500ee5128 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.115509023Z" level=info msg="Starting container: 391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14" id=433a1c79-4f5a-4cc7-8acf-6e17685dd851 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.134112804Z" level=info msg="Started container" PID=15710 containerID=391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=433a1c79-4f5a-4cc7-8acf-6e17685dd851 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.138610391Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.148502975Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.148521507Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.148533031Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.157668764Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.157686798Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.157697826Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.168202401Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:23 hub-master-0.workload.bos2.lab
crio[8584]: time="2023-01-23 16:16:23.168224865Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.168235220Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.176807044Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.176822857Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.176833590Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.185302658Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:16:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:23.185320183Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:16:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:23.187624 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/175.log" Jan 23 16:16:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:23.188114 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14} Jan 23 16:16:23 hub-master-0.workload.bos2.lab conmon[15698]: conmon 391d4b963b294ae6f490 : container 15710 exited with status 1 Jan 23 16:16:23 hub-master-0.workload.bos2.lab systemd[1]: crio-391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.scope has successfully entered the 'dead' state. Jan 23 16:16:23 hub-master-0.workload.bos2.lab systemd[1]: crio-391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.scope: Consumed 565ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.scope completed and consumed the indicated resources. Jan 23 16:16:23 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.scope has successfully entered the 'dead' state. 
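The repeated pair "not found in CNI cache" / "falling back to loading from existing plugins on disk" above describes a two-step lookup: CRI-O first tries the CNI result it cached for the sandbox, and only on a miss re-reads the network configurations on disk. The Go sketch below illustrates that order using only the standard library; the cache path and the match-on-first-conf shortcut are assumptions for illustration, not CRI-O's actual code.

    // Sketch of a cache-first, disk-fallback CNI config lookup.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    const (
    	cacheDir = "/var/lib/cni/cache/results" // assumed cache location
    	confDir  = "/etc/kubernetes/cni/net.d"  // conf dir seen in this log
    )

    func loadNetworkConfig(network, sandboxID string) (string, error) {
    	// 1. Try the per-sandbox cached result first.
    	cached := filepath.Join(cacheDir, fmt.Sprintf("%s-%s", network, sandboxID))
    	if data, err := os.ReadFile(cached); err == nil {
    		return string(data), nil
    	}
    	// 2. "falling back to loading from existing plugins on disk":
    	// re-scan the conf directory for a network definition.
    	entries, err := os.ReadDir(confDir)
    	if err != nil {
    		return "", err
    	}
    	for _, e := range entries {
    		ext := filepath.Ext(e.Name())
    		if ext == ".conf" || ext == ".conflist" {
    			data, err := os.ReadFile(filepath.Join(confDir, e.Name()))
    			if err != nil {
    				return "", err
    			}
    			return string(data), nil // real code would match the "name" field
    		}
    	}
    	return "", fmt.Errorf("network %q not found in cache or on disk", network)
    }

    func main() {
    	conf, err := loadNetworkConfig("multus-cni-network", "d84b8663838b")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println(conf)
    }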
Jan 23 16:16:23 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14.scope: Consumed 46ms CPU time
Jan 23 16:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:24.191371 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/176.log"
Jan 23 16:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:24.191920 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/175.log"
Jan 23 16:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:24.193505 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14" exitCode=1
Jan 23 16:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:24.193530 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14}
Jan 23 16:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:24.193548 8631 scope.go:115] "RemoveContainer" containerID="4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6"
Jan 23 16:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:24.194482 8631 scope.go:115] "RemoveContainer" containerID="391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14"
Jan 23 16:16:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:24.194504317Z" level=info msg="Removing container: 4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6" id=27d47802-622b-4427-960e-ce4301abbee9 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:24.195029 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:16:24 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-7cb4e7d0fffc24c1c7285c0da169aef51d176aeb9893eabd075e16ed993a7bab-merged.mount: Succeeded.
Jan 23 16:16:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:24.237267430Z" level=info msg="Removed container 4857428dfee18a4920f2649a2bb02f7c7b5e60d0da3745e5c7663e90a9e95cc6: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=27d47802-622b-4427-960e-ce4301abbee9 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:24.641241 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab"
Jan 23 16:16:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:25.196999 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/176.log"
Jan 23 16:16:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:25.667735 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:16:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:25.668557 8631 scope.go:115] "RemoveContainer" containerID="391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14"
Jan 23 16:16:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:25.669035 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:27.854921 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:27.854948 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:27.854957 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:27.854965 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:27.854976 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:27.854986 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:27.854994 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:16:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:28.145974627Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.590499854Z" level=info msg="NetworkStart: stopping network for sandbox dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.590678131Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/62a4f480-5302-4186-9243-131e0e30c82c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.590702101Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.590709688Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.590717465Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.594042397Z" level=info msg="NetworkStart: stopping network for sandbox c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.594144551Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/50cbaa03-656f-44ae-a9fa-14729f05674c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.594163653Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.594169770Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.594175978Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.600196236Z" level=info msg="NetworkStart: stopping network for sandbox b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.600350633Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/10f72223-299a-4e15-833e-6ef03c2ba59a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.600374438Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.600383456Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.600391017Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.602264513Z" level=info msg="NetworkStart: stopping network for sandbox 2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.602401801Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/a1984ae3-ce69-4aa4-a067-829ad707085e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.602426963Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.602434966Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.602441123Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.850340391Z" level=info msg="NetworkStart: stopping network for sandbox 81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.850454939Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/14072687-8a39-43f4-aeda-907065e5f3a0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.850475011Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.850481629Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:16:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:35.850488396Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:16:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:36.997034 8631 scope.go:115] "RemoveContainer" containerID="391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14"
Jan 23 16:16:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:36.997686 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:16:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:37.602317 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab"
Jan 23 16:16:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:51.996507 8631 scope.go:115] "RemoveContainer" containerID="391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14"
Jan 23 16:16:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:51.997296989Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=6cdca596-15e9-49d0-9538-cfc70cbc97c1 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:51.997430424Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6cdca596-15e9-49d0-9538-cfc70cbc97c1 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:51.997984087Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=e7988013-54e9-46b3-beba-453c43bd0421 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:51.998161927Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e7988013-54e9-46b3-beba-453c43bd0421 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:16:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:51.998975450Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=18358686-80ee-4034-9833-29a399319549 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:51.999058928Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61.scope.
Jan 23 16:16:52 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61.
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.123979883Z" level=info msg="Created container dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=18358686-80ee-4034-9833-29a399319549 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.124505182Z" level=info msg="Starting container: dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61" id=47e62db7-ded3-4d5c-be88-298be928650a name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.144140064Z" level=info msg="Started container" PID=16701 containerID=dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=47e62db7-ded3-4d5c-be88-298be928650a name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.148484462Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.159155163Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.159178292Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.159192088Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.168301139Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.168323991Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.168338996Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.177177503Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.177193954Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.177203646Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.185218595Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:52.185237874Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:52.250995 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/176.log"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:52.252251 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61}
Jan 23 16:16:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:52.252511 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:16:52 hub-master-0.workload.bos2.lab conmon[16676]: conmon dd8e3c10002f1232ffc1 : container 16701 exited with status 1
Jan 23 16:16:52 hub-master-0.workload.bos2.lab systemd[1]: crio-dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61.scope: Succeeded.
Jan 23 16:16:52 hub-master-0.workload.bos2.lab systemd[1]: crio-dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61.scope: Consumed 580ms CPU time
Jan 23 16:16:52 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61.scope: Succeeded.
Jan 23 16:16:52 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61.scope: Consumed 64ms CPU time
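Each "CNI monitoring event WRITE" on /var/lib/cni/bin above is followed by CRI-O re-reading /etc/kubernetes/cni/net.d/00-multus.conf and logging the network's name and type. A minimal sketch of that re-read, assuming only that the conf file is JSON with top-level "name" and "type" fields:

    // Sketch: parse a CNI conf file and report the fields CRI-O logs.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Only the two fields the log line reports; real CNI configs carry more.
    type cniConf struct {
    	Name string `json:"name"`
    	Type string `json:"type"`
    }

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/cni/net.d/00-multus.conf")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	var c cniConf
    	if err := json.Unmarshal(data, &c); err != nil {
    		fmt.Println("parse failed:", err)
    		return
    	}
    	fmt.Printf("Found CNI network %s (type=%s)\n", c.Name, c.Type)
    }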
Jan 23 16:16:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:53.255841 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/177.log"
Jan 23 16:16:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:53.256192 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/176.log"
Jan 23 16:16:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:53.257553 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61" exitCode=1
Jan 23 16:16:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:53.257576 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61}
Jan 23 16:16:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:53.257595 8631 scope.go:115] "RemoveContainer" containerID="391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14"
Jan 23 16:16:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:53.258387201Z" level=info msg="Removing container: 391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14" id=28eb708f-34a9-4865-a7c2-22d4934db614 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:16:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:53.258493 8631 scope.go:115] "RemoveContainer" containerID="dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61"
Jan 23 16:16:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:53.259002 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:16:53 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-4993e7d7e813ea9af72f1c121e4bd4e1586b01dc35cba120bc124868c86e091d-merged.mount: Succeeded.
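The back-off in the two "CrashLoopBackOff" errors grows from 20s (16:16:24) to 40s (16:16:53), consistent with kubelet doubling the restart delay after each failed start. The sketch below prints such a doubling schedule; the 10s base and 5m cap match upstream kubelet defaults but are assumptions here, not values taken from this log.

    // Sketch: exponential restart back-off with a cap, as kubelet applies it.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	base := 10 * time.Second    // assumed initial back-off
    	maxDelay := 5 * time.Minute // assumed cap
    	delay := base
    	for restart := 1; restart <= 7; restart++ {
    		fmt.Printf("restart %d: back-off %s\n", restart, delay)
    		delay *= 2 // double after every failed restart
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

Run as written, this prints 10s, 20s, 40s, 80s, 2m40s, then holds at 5m0s; the 20s and 40s steps are the two values visible in this log.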
Jan 23 16:16:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:53.291357510Z" level=info msg="Removed container 391d4b963b294ae6f490d0f443790b6f2470fc565fbb688753106a12d7438d14: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=28eb708f-34a9-4865-a7c2-22d4934db614 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:16:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:54.260930 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/177.log"
Jan 23 16:16:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:16:54.262806 8631 scope.go:115] "RemoveContainer" containerID="dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61"
Jan 23 16:16:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:54.263331 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:16:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:58.149329668Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.218492431Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.218540841Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-de00ae5b\x2d2a59\x2d4ecb\x2d8575\x2d5898873367d0.mount: Succeeded.
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-de00ae5b\x2d2a59\x2d4ecb\x2d8575\x2d5898873367d0.mount: Consumed 0 CPU time
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-de00ae5b\x2d2a59\x2d4ecb\x2d8575\x2d5898873367d0.mount: Succeeded.
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-de00ae5b\x2d2a59\x2d4ecb\x2d8575\x2d5898873367d0.mount: Consumed 0 CPU time
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.246129555Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.246155968Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8e70d917\x2de8b5\x2d492e\x2db5a2\x2dc4744138f447.mount: Succeeded.
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8e70d917\x2de8b5\x2d492e\x2db5a2\x2dc4744138f447.mount: Consumed 0 CPU time
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.264691128Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.264726334Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a7a7cc4b\x2d893e\x2d4643\x2da4ed\x2df76ed127bae8.mount: Succeeded.
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a7a7cc4b\x2d893e\x2d4643\x2da4ed\x2df76ed127bae8.mount: Consumed 0 CPU time
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8e70d917\x2de8b5\x2d492e\x2db5a2\x2dc4744138f447.mount: Succeeded.
Jan 23 16:16:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8e70d917\x2de8b5\x2d492e\x2db5a2\x2dc4744138f447.mount: Consumed 0 CPU time
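Every "PollImmediate error waiting for ReadinessIndicatorFile" above is the same wait: Multus polls for /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, the file ovnkube-node writes once it is healthy, and gives up after a timeout; these teardowns therefore fail while ovnkube-node crash-loops. A standard-library sketch of that check-immediately-then-tick loop, with the 1s interval and 30s timeout as assumed values:

    // Sketch: PollImmediate-style wait for a readiness indicator file.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForFile(path string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for { // check immediately, then on every tick
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitForFile("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
    		time.Second, 30*time.Second)
    	if err != nil {
    		fmt.Println(err) // mirrors "timed out waiting for the condition"
    	}
    }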
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.289693423Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.289725126Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.298263359Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.298295799Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.308326315Z" level=info msg="runSandbox: deleting pod ID a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999 from idIndex" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.308356919Z" level=info msg="runSandbox: removing pod sandbox a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.308374055Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:16:59.308389646Z" level=info msg="runSandbox: unmounting shmPath for sandbox a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.316375152Z" level=info msg="runSandbox: removing pod sandbox from storage: a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.325547291Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.325574916Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bf1e548d-517a-4cfe-8f41-ac9c05f165e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.325809 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.325960 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.325986 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.326040 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.332294475Z" level=info msg="runSandbox: deleting pod ID 0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285 from idIndex" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.332325767Z" level=info msg="runSandbox: removing pod sandbox 0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.332339598Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.332355009Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.338107934Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.338138228Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.345302754Z" level=info msg="runSandbox: deleting pod ID d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03 from idIndex" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.345328840Z" level=info msg="runSandbox: removing pod sandbox d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.345342806Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 
16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.345354732Z" level=info msg="runSandbox: unmounting shmPath for sandbox d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.352297975Z" level=info msg="runSandbox: removing pod sandbox from storage: 0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.360327930Z" level=info msg="runSandbox: removing pod sandbox from storage: d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.364495928Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.364514784Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=14111b97-8ccd-4e74-813d-bf7653dbc2e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.364728 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.364765 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.364789 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.364832 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.372297822Z" level=info msg="runSandbox: deleting pod ID da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728 from idIndex" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.372337011Z" level=info msg="runSandbox: removing pod sandbox da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.372350365Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.372363160Z" level=info msg="runSandbox: unmounting shmPath for sandbox da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.376497987Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.376517397Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=18f1b1a7-ad57-4d90-b613-271f733b1a94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.376744 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.376787 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.376810 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.376852 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.378454123Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.378483118Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.384297179Z" level=info msg="runSandbox: removing pod sandbox from storage: da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.384312133Z" level=info msg="runSandbox: deleting pod ID 36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5 from idIndex" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.384406166Z" level=info msg="runSandbox: removing pod sandbox 36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.384418985Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.384432830Z" level=info msg="runSandbox: unmounting shmPath for sandbox 36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.387612383Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.387640457Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.394541298Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.394561590Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=e4cf0dc3-6279-4813-9e14-d9bc4e1da4e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.394761 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.394796 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.394818 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.394859 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.398688723Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.398722605Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.399276372Z" level=info msg="runSandbox: removing pod sandbox from storage: 36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.418504035Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.418525179Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=58f97a81-bee2-4ce2-8086-c9a68809626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.418747 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.418787 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.418813 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.418860 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.430295664Z" level=info msg="runSandbox: deleting pod ID 1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5 from idIndex" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.430321059Z" level=info msg="runSandbox: removing pod sandbox 1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.430333619Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.430347484Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.438317725Z" level=info msg="runSandbox: removing pod sandbox from storage: 1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.446541079Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.446562729Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=8d08aeb2-9377-45d8-bae3-b3f800dbe8db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.446786 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.446822 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.446849 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.446899 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.470290106Z" level=info msg="runSandbox: deleting pod ID 72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae from idIndex" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.470315067Z" level=info msg="runSandbox: removing pod sandbox 72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.470327959Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.470339762Z" level=info msg="runSandbox: unmounting shmPath for sandbox 72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.478285076Z" level=info msg="runSandbox: deleting pod ID e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9 from idIndex" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.478308539Z" level=info msg="runSandbox: removing pod sandbox e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.478323080Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.478336022Z" level=info msg="runSandbox: unmounting shmPath for sandbox e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.482289098Z" level=info msg="runSandbox: removing pod sandbox from storage: 72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.486279045Z" level=info msg="runSandbox: removing pod sandbox from storage: e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.494503899Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" 
id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.494523853Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=74a600c4-346b-4e31-825c-2d1f9b982b19 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.494675 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.494709 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.494730 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.494770 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.495256072Z" level=info msg="runSandbox: deleting pod ID 4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc from idIndex" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.495281369Z" level=info msg="runSandbox: removing pod sandbox 4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.495294692Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.495307277Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.510501098Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.510521257Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=ba1c340f-b8a1-422f-8f49-a8194fc160c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: E0123 16:16:59.510724 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.510759 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.510784 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.510831 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.511333738Z" level=info msg="runSandbox: removing pod sandbox from storage: 4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.522485351Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.522505162Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=fd74d785-cc1a-4578-b7b8-d7859fa7f52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.522675 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.522712 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.522733 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.522775 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.535557551Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.535589486Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.594083525Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.594115317Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.601099247Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.601126993Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.622283489Z" level=info msg="runSandbox: deleting pod ID 40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02 from idIndex" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.622310504Z" level=info msg="runSandbox: removing pod sandbox 40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.622323984Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.622337974Z" level=info msg="runSandbox: unmounting shmPath for sandbox 40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.635326808Z" level=info msg="runSandbox: removing pod sandbox from storage: 40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.644519083Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.644542378Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=6d74a3d4-1006-4dd3-a3b9-7f4bbff62ba4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.644756 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.644788 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.644808 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.644843 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.673282080Z" level=info msg="runSandbox: deleting pod ID a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db from idIndex" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.673309340Z" level=info msg="runSandbox: removing pod sandbox a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.673323387Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.673335924Z" level=info msg="runSandbox: unmounting shmPath for sandbox a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.685295477Z" level=info msg="runSandbox: deleting pod ID d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114 from idIndex" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.685324355Z" level=info msg="runSandbox: removing pod sandbox d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.685337873Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.685349314Z" level=info msg="runSandbox: unmounting shmPath for sandbox d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.685297089Z" level=info msg="runSandbox: removing pod sandbox from storage: a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.701294776Z" level=info msg="runSandbox: removing pod sandbox from storage: d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.701491836Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.701511559Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=1e5cc9b0-aa4a-4b20-8a3a-f4739631db6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.701714 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.701746 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.701775 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.701830 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.713488863Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:16:59.713509064Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=14ee98f0-2e56-4df2-aca0-993888a84fa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.713734 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.713775 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.713802 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:16:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:16:59.713866 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e5601c10\x2daed1\x2d4911\x2db3b4\x2d1c538160a0ba.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e5601c10\x2daed1\x2d4911\x2db3b4\x2d1c538160a0ba.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e5601c10\x2daed1\x2d4911\x2db3b4\x2d1c538160a0ba.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e5601c10\x2daed1\x2d4911\x2db3b4\x2d1c538160a0ba.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e5601c10\x2daed1\x2d4911\x2db3b4\x2d1c538160a0ba.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e5601c10\x2daed1\x2d4911\x2db3b4\x2d1c538160a0ba.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a7c2b926969a8c66fa0a84e6035dde4ede954693015858a3962870224b3169db-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8448acfc\x2d4457\x2d48bb\x2da909\x2da4c132ec8212.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8448acfc\x2d4457\x2d48bb\x2da909\x2da4c132ec8212.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8448acfc\x2d4457\x2d48bb\x2da909\x2da4c132ec8212.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8448acfc\x2d4457\x2d48bb\x2da909\x2da4c132ec8212.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8448acfc\x2d4457\x2d48bb\x2da909\x2da4c132ec8212.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8448acfc\x2d4457\x2d48bb\x2da909\x2da4c132ec8212.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d84b8663838b19e9ed76c2fa07b899e972814e1c00f0c508480030f25cd67114-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-087347b4\x2d6e39\x2d42b1\x2d8aba\x2d1e49de29f9da.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-087347b4\x2d6e39\x2d42b1\x2d8aba\x2d1e49de29f9da.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-087347b4\x2d6e39\x2d42b1\x2d8aba\x2d1e49de29f9da.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-087347b4\x2d6e39\x2d42b1\x2d8aba\x2d1e49de29f9da.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-087347b4\x2d6e39\x2d42b1\x2d8aba\x2d1e49de29f9da.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-087347b4\x2d6e39\x2d42b1\x2d8aba\x2d1e49de29f9da.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-40f8217526b273b1186b4429083e12366f0f88c49b332094032e9d7e5fcbcd02-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a7e0c2fc\x2d8546\x2d4204\x2dbc65\x2d61a2a91b42a4.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a7e0c2fc\x2d8546\x2d4204\x2dbc65\x2d61a2a91b42a4.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a7e0c2fc\x2d8546\x2d4204\x2dbc65\x2d61a2a91b42a4.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a7e0c2fc\x2d8546\x2d4204\x2dbc65\x2d61a2a91b42a4.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a7e0c2fc\x2d8546\x2d4204\x2dbc65\x2d61a2a91b42a4.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a7e0c2fc\x2d8546\x2d4204\x2dbc65\x2d61a2a91b42a4.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4aea4fa3677bdf4d82da60c547bb07c78007fa195a8c5b30f869a34e0339d3dc-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d5e84b71\x2de25e\x2d4483\x2daf5e\x2d644be83e25d4.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d5e84b71\x2de25e\x2d4483\x2daf5e\x2d644be83e25d4.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d5e84b71\x2de25e\x2d4483\x2daf5e\x2d644be83e25d4.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d5e84b71\x2de25e\x2d4483\x2daf5e\x2d644be83e25d4.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d5e84b71\x2de25e\x2d4483\x2daf5e\x2d644be83e25d4.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d5e84b71\x2de25e\x2d4483\x2daf5e\x2d644be83e25d4.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e0eeb73315f8dd21896b293047a885c4418eb28933530a8e7e1ec360330cbbc9-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-27823c6e\x2d462f\x2d4dbc\x2d898d\x2dc1a8eb118472.mount: Succeeded.
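-- Note: the "\x2d" runs in the mount unit names above are not corruption. systemd
-- derives mount unit names from paths: "/" becomes "-", and a literal "-" inside a
-- path component is hex-escaped as \x2d, so run-netns-e5601c10\x2daed1\x2d... is the
-- unit guarding /run/netns/e5601c10-aed1-.... A minimal Go sketch of the reverse
-- mapping (roughly what "systemd-escape --unescape --path" does); written for this
-- note, not taken from systemd:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeMountUnit undoes systemd's unit-name escaping for the mount units
// logged above: strips the ".mount" suffix, turns "\xNN" hex escapes back
// into bytes (\x2d -> "-"), and maps the remaining "-" separators to "/".
func unescapeMountUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // the loop increment consumes the 4th escape char
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/')
		} else {
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeMountUnit(`run-netns-e5601c10\x2daed1\x2d4911\x2db3b4\x2d1c538160a0ba.mount`))
	// Output: /run/netns/e5601c10-aed1-4911-b3b4-1c538160a0ba
}
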
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-27823c6e\x2d462f\x2d4dbc\x2d898d\x2dc1a8eb118472.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-27823c6e\x2d462f\x2d4dbc\x2d898d\x2dc1a8eb118472.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-27823c6e\x2d462f\x2d4dbc\x2d898d\x2dc1a8eb118472.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-27823c6e\x2d462f\x2d4dbc\x2d898d\x2dc1a8eb118472.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-27823c6e\x2d462f\x2d4dbc\x2d898d\x2dc1a8eb118472.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-72fa4d2e6ae8067ea5ae858d76c247602f334e2fd24cc522eca1cb81f30399ae-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-280f54b0\x2da390\x2d4ad3\x2db63a\x2d99c83abd7c76.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-280f54b0\x2da390\x2d4ad3\x2db63a\x2d99c83abd7c76.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-280f54b0\x2da390\x2d4ad3\x2db63a\x2d99c83abd7c76.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-280f54b0\x2da390\x2d4ad3\x2db63a\x2d99c83abd7c76.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-280f54b0\x2da390\x2d4ad3\x2db63a\x2d99c83abd7c76.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-280f54b0\x2da390\x2d4ad3\x2db63a\x2d99c83abd7c76.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1c118e098fd5565d59eead4d93d94db20dae007b836eff96a88e9030900a68b5-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c097c349\x2df4ba\x2d4cf6\x2d8497\x2dd044db4d9cd8.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c097c349\x2df4ba\x2d4cf6\x2d8497\x2dd044db4d9cd8.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c097c349\x2df4ba\x2d4cf6\x2d8497\x2dd044db4d9cd8.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c097c349\x2df4ba\x2d4cf6\x2d8497\x2dd044db4d9cd8.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c097c349\x2df4ba\x2d4cf6\x2d8497\x2dd044db4d9cd8.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c097c349\x2df4ba\x2d4cf6\x2d8497\x2dd044db4d9cd8.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-36854b8e82bcdc37440d93369f923e4817da4023a72cec1669d3cb27f890dcf5-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-02773faa\x2da579\x2d4c16\x2d9438\x2dc3e56af30922.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-02773faa\x2da579\x2d4c16\x2d9438\x2dc3e56af30922.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-02773faa\x2da579\x2d4c16\x2d9438\x2dc3e56af30922.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-02773faa\x2da579\x2d4c16\x2d9438\x2dc3e56af30922.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-02773faa\x2da579\x2d4c16\x2d9438\x2dc3e56af30922.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-02773faa\x2da579\x2d4c16\x2d9438\x2dc3e56af30922.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-da7ff1c301a26b1d159dee23e6f39cf0171dd72adb3c48792b70c45fd6475728-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a7a7cc4b\x2d893e\x2d4643\x2da4ed\x2df76ed127bae8.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a7a7cc4b\x2d893e\x2d4643\x2da4ed\x2df76ed127bae8.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a7a7cc4b\x2d893e\x2d4643\x2da4ed\x2df76ed127bae8.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a7a7cc4b\x2d893e\x2d4643\x2da4ed\x2df76ed127bae8.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d1ef028ccbebeed298dd40e33fa568c2be57c742824aaf7eef9489eaf944fa03-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8e70d917\x2de8b5\x2d492e\x2db5a2\x2dc4744138f447.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8e70d917\x2de8b5\x2d492e\x2db5a2\x2dc4744138f447.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0239407e7b2a1037dffd3acea5e1e809f070ce6a8318269567774472fe6be285-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-de00ae5b\x2d2a59\x2d4ecb\x2d8575\x2d5898873367d0.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-de00ae5b\x2d2a59\x2d4ecb\x2d8575\x2d5898873367d0.mount: Consumed 0 CPU time
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999-userdata-shm.mount: Succeeded.
Jan 23 16:17:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a3721bb99e4e6b610166b82ed143292bca559c22e3a8de50c0ed93db7b90a999-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:17:01 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00092|connmgr|INFO|br-int<->unix#2: 67 flow_mods in the 30 s starting 54 s ago (37 adds, 30 deletes)
Jan 23 16:17:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:06.997182 8631 scope.go:115] "RemoveContainer" containerID="dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61"
Jan 23 16:17:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:06.997707 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:17:08 hub-master-0.workload.bos2.lab systemd[1]: rpm-ostreed.service: Succeeded.
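-- Note: the CrashLoopBackOff entry above is the likely root cause of every Multus
-- "readinessindicatorfile" timeout in this log: ovnkube-node is the component that
-- writes 10-ovn-kubernetes.conf, and kubelet is holding it in restart back-off.
-- Kubelet's restart back-off is documented as starting at 10s and doubling per
-- consecutive failure, capped at 5m (and reset after a sustained clean run), so
-- "back-off 40s" corresponds to the third failure. An illustrative Go calculation,
-- not kubelet's actual code:

package main

import (
	"fmt"
	"time"
)

// restartDelay models kubelet's documented container restart back-off:
// 10s after the first failure, doubling per consecutive failure, capped
// at 5 minutes. A sketch for reading the log, nothing more.
func restartDelay(consecutiveFailures int) time.Duration {
	delay := 10 * time.Second
	for i := 1; i < consecutiveFailures; i++ {
		delay *= 2
		if delay > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return delay
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> back-off %v\n", n, restartDelay(n))
	}
	// failure 3 -> back-off 40s, matching "back-off 40s restarting
	// failed container=ovnkube-node" above.
}
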
Jan 23 16:17:08 hub-master-0.workload.bos2.lab systemd[1]: rpm-ostreed.service: Consumed 113ms CPU time
Jan 23 16:17:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:09.996671 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:17:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:09.996840 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:09.997004919Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:09.997263445Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:09.997151588Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:09.997449701Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:10.013191318Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/1561b91c-eded-4df6-a169-9268b689cc1d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:10.013223269Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:10.013961111Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/60d90c73-6437-4569-8a5b-c4e75a9d05b7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:10.013982524Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:10.996285 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:10.996867855Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:10.996904570Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:11.008591926Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/967ec2f3-b32c-47de-8db1-17b29b2c7e99 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:11.008611992Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:11.995495 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:17:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:11.995884613Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:11.995937931Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:12.006830773Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/7632f6b2-0c70-437b-a319-5cd4338fad6a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:12.006854835Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:12.996273 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:12.996627868Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:12.996665210Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.006747115Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/5c222525-139f-4b43-a0aa-ff6b8a50602c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.006767640Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:13.996211 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.996539286Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:13.996558 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:13.996574 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:13.996581 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.996591497Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:13.996787 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:17:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:13.996810 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.996881866Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.996912037Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.996971342Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.996998132Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:17:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:13.996972 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.997157074Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.997200158Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.997254633Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.997272837Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.997316494Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.997333140Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.997320746Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:17:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:13.997283577Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.025661922Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 
NetNS:/var/run/netns/345d67b6-b938-44ed-b9d6-4573b7e2960c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.025688662Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.031354478Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/72c28500-edd6-4b68-9b49-891cfac62f9c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.031377920Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.032186336Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/96122fcb-9ebd-4da5-b4c2-8646901b9ab3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.032214883Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.036273308Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/7b1ae578-6894-4e2b-8844-7257f2969fba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.036298435Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.041503830Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8f9e42d1-bbb1-44d1-986d-393b4a3ec65e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.041524643Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.042038989Z" level=info msg="Got pod network 
&{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/21f7aefc-3661-45bc-a3b0-88c4a031f86d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.042062712Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.043268347Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/29ecbe4e-16ce-4d10-9ed5-4ce31a234d53 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:17:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:14.043294410Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.602940885Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.603179451Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.604641265Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.604674027Z" level=info msg="runSandbox: cleaning up namespaces 
after failing to run sandbox c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount has successfully entered the 'dead' state. Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount completed and consumed the indicated resources. Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount has successfully entered the 'dead' state. Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.612363235Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.612409796Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount completed and consumed the indicated resources. 
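The "PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" failures above come from Multus polling for its readiness indicator file until a deadline expires. A minimal sketch of that wait pattern using k8s.io/apimachinery's wait.PollImmediate, whose timeout error carries exactly the message seen in these records; the 1s interval and 5s timeout here are illustrative, not Multus's configured values:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until path exists or timeout elapses.
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        return wait.PollImmediate(1*time.Second, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err != nil {
                if os.IsNotExist(err) {
                    return false, nil // not there yet; keep polling
                }
                return false, err // unexpected error; stop polling
            }
            return true, nil // file present; condition met
        })
    }

    func main() {
        err := waitForReadinessIndicator(
            "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 5*time.Second)
        // With the file absent, err.Error() is
        // "timed out waiting for the condition" (wait.ErrWaitTimeout).
        fmt.Println(err)
    }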
Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.612757844Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.612789134Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount has successfully entered the 'dead' state. Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount completed and consumed the indicated resources. Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount has successfully entered the 'dead' state. Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount completed and consumed the indicated resources. Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount has successfully entered the 'dead' state. Jan 23 16:17:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount completed and consumed the indicated resources. 
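The \x2d sequences in the transient .mount unit names above are systemd's unit-name escaping: '/' in the mount path maps to '-', and bytes outside [a-zA-Z0-9:_.] (including a literal '-') are written as \xNN. A minimal Go sketch of the reverse mapping, shown as an illustration of the rule rather than systemd's own unescaping code:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnitPath maps a systemd mount unit name back to its path:
    // '-' becomes '/', and \xNN becomes the byte with hex value NN.
    func unescapeUnitPath(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var b strings.Builder
        b.WriteByte('/')
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '-':
                b.WriteByte('/')
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                n, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
                b.WriteByte(byte(n))
                i += 3
            default:
                b.WriteByte(name[i])
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(unescapeUnitPath(`run-netns-de00ae5b\x2d2a59\x2d4ecb\x2d8575\x2d5898873367d0.mount`))
        // -> /run/netns/de00ae5b-2a59-4ecb-8575-5898873367d0
    }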
Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.680284495Z" level=info msg="runSandbox: deleting pod ID dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d from idIndex" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.680312881Z" level=info msg="runSandbox: removing pod sandbox dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.680330076Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.680341777Z" level=info msg="runSandbox: unmounting shmPath for sandbox dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.693279452Z" level=info msg="runSandbox: deleting pod ID c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e from idIndex" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.693307118Z" level=info msg="runSandbox: removing pod sandbox c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.693319501Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.693331364Z" level=info msg="runSandbox: unmounting shmPath for sandbox c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.697281748Z" level=info msg="runSandbox: removing pod sandbox from storage: dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.701269232Z" level=info msg="runSandbox: removing pod sandbox from storage: c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.701289555Z" level=info msg="runSandbox: deleting pod ID b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10 from idIndex" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.701317879Z" level=info msg="runSandbox: removing pod sandbox 
b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.701332062Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.701344900Z" level=info msg="runSandbox: unmounting shmPath for sandbox b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.705532452Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.705552923Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=87827cf3-50e6-42b7-863f-be62fe893e29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.705790 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.705841 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.705864 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.705916 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.709281029Z" level=info msg="runSandbox: deleting pod ID 2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982 from idIndex" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.709311480Z" level=info msg="runSandbox: removing pod sandbox 2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.709325218Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.709336344Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.713296656Z" level=info msg="runSandbox: removing pod sandbox from storage: b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.721282914Z" level=info msg="runSandbox: removing pod sandbox from storage: 2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.721500651Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.721520998Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b166e034-b448-4586-a47b-2de16ec61d13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.721762 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is 
ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.721797 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.721820 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.721859 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.737649055Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.737674397Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=1c8a29c4-6ab1-4fc2-8ade-2331c0ebd929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.737862 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.737894 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.737915 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.737956 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.753535824Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.753554940Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=9ce5c2ea-3019-4624-a038-5dad7ce5f07b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.753740 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.753772 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.753792 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.753828 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.861184408Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.861228316Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.944294514Z" level=info msg="runSandbox: deleting pod ID 81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728 from idIndex" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.944320709Z" level=info msg="runSandbox: removing pod sandbox 81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.944333487Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.944344127Z" level=info msg="runSandbox: unmounting shmPath for sandbox 81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.952289448Z" level=info msg="runSandbox: removing pod sandbox from storage: 81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.960572313Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:20.960601133Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f31f24d9-3c95-441b-bfe6-2ef3e55c47e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:17:20 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.960778 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.960814 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.960836 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.960877 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:20.996274 8631 scope.go:115] "RemoveContainer" containerID="dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61"
Jan 23 16:17:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:20.996783 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:21.318819 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:21.318968 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:21.319034 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319053427Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319096277Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:21.319180 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:21.319367 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319371302Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319405857Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319432078Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319464253Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319533850Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319566373Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319665267Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.319700370Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.344149224Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/618e7f42-c1c0-4317-84d5-87a5e01414ca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.344173369Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.347306099Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/168a03b2-d6e8-4ccb-a95b-51fc64aaeedb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.347327258Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.348062519Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c0f2a065-6d65-4afb-86ba-1d800647121c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.348086178Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.349736128Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/55987563-5a97-4997-9b3e-c3c092b18040 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.349755329Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.350502645Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/9147a0e4-4800-4753-a841-ec836c4c3135 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:21.350521947Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-14072687\x2d8a39\x2d43f4\x2daeda\x2d907065e5f3a0.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-10f72223\x2d299a\x2d4e15\x2d833e\x2d6ef03c2ba59a.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a1984ae3\x2dce69\x2d4aa4\x2da067\x2d829ad707085e.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-50cbaa03\x2d656f\x2d44ae\x2da9fa\x2d14729f05674c.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-62a4f480\x2d5302\x2d4186\x2d9243\x2d131e0e30c82c.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-81774d6d9288cebdaf816a7ce109c90bf5cf350d443f5d452f66520a20c46728-userdata-shm.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b34062fc72d5d8aad46dfd7eb7013c6267217f4baf50737397162c42341a4d10-userdata-shm.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2064cafd9d58a98abe3a52e2932206cfdbf290cf577ffd6a3759d308b89bc982-userdata-shm.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c24d8611aa29c1cb7bfada92b4ff876c6a079cac68b2944a60d0e1ac22fc013e-userdata-shm.mount completed and consumed the indicated resources.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dee62fdd442c251918aa422a80a5b3a7247b9afed5038bdb44f407a02206d59d-userdata-shm.mount completed and consumed the indicated resources.
Jan 23 16:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:27.855909 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:27.855930 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:27.855937 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:27.855945 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:27.855951 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:27.855960 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:27.855967 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:28.149132697Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:17:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:30.484906 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m"
Jan 23 16:17:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:34.996495 8631 scope.go:115] "RemoveContainer" containerID="dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61"
Jan 23 16:17:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:34.997297500Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=e2f7a321-c6fc-41bf-a208-a75925dc913f name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:17:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:34.997471622Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e2f7a321-c6fc-41bf-a208-a75925dc913f name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:17:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:34.998066305Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=82da5d0b-5a70-40f9-b3c1-ac7069e6404d name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:17:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:34.998242439Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=82da5d0b-5a70-40f9-b3c1-ac7069e6404d name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:17:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:34.999258014Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=503fe998-cb91-4110-a3e7-6a03f2c77fe6 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:17:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:34.999348477Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:17:35 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope. -- Subject: Unit crio-conmon-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope has finished starting up. -- -- The start-up result is done.
Jan 23 16:17:35 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165. -- Subject: Unit crio-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope has finished starting up. -- -- The start-up result is done.
Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.113037233Z" level=info msg="Created container 401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=503fe998-cb91-4110-a3e7-6a03f2c77fe6 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.113609306Z" level=info msg="Starting container: 401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165" id=2e660c48-7de7-4916-a2ea-ccb49e948948 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.121218547Z" level=info msg="Started container" PID=18000 containerID=401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=2e660c48-7de7-4916-a2ea-ccb49e948948 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.125352680Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.135684797Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.135706271Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.135716711Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.145545012Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.145567622Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.145582558Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.154566646Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.154582127Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.154590947Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.162660086Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.162675050Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.162683357Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:17:35 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:17:35.171955652Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:17:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:35.171973814Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:17:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:35.347220 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/177.log" Jan 23 16:17:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:35.348145 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165} Jan 23 16:17:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:35.348531 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:17:35 hub-master-0.workload.bos2.lab conmon[17988]: conmon 401fbba2d131a0bee3fa : container 18000 exited with status 1 Jan 23 16:17:35 hub-master-0.workload.bos2.lab systemd[1]: crio-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope has successfully entered the 'dead' state. Jan 23 16:17:35 hub-master-0.workload.bos2.lab systemd[1]: crio-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope: Consumed 564ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope completed and consumed the indicated resources. Jan 23 16:17:35 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope has successfully entered the 'dead' state. Jan 23 16:17:35 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope: Consumed 47ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165.scope completed and consumed the indicated resources. 
Jan 23 16:17:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:36.351941 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/178.log" Jan 23 16:17:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:36.352617 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/177.log" Jan 23 16:17:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:36.353759 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165" exitCode=1 Jan 23 16:17:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:36.353777 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165} Jan 23 16:17:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:36.353795 8631 scope.go:115] "RemoveContainer" containerID="dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61" Jan 23 16:17:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:36.354673 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165" Jan 23 16:17:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:36.354766268Z" level=info msg="Removing container: dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61" id=9a76148b-3593-4c00-a2f4-8e415a776300 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:17:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:36.355229 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:17:36 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-322e566ffdc056c58079a2ca96037a0cce77e7616d94761a94e93b2b153efe2b-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-322e566ffdc056c58079a2ca96037a0cce77e7616d94761a94e93b2b153efe2b-merged.mount has successfully entered the 'dead' state. 
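The restart delay grows between the two "Error syncing pod" messages ("back-off 40s" earlier, "back-off 1m20s" here) because kubelet doubles the CrashLoopBackOff delay after each failed restart, starting at 10s and capping at 5 minutes. A sketch of that schedule in Python:

def crashloop_delays(restarts: int, base: float = 10.0, cap: float = 300.0):
    # Kubelet starts at a 10s delay, doubles it after every failed restart,
    # and caps it at 5 minutes; the counter resets after the container runs
    # successfully for 10 minutes.
    delay = base
    for _ in range(restarts):
        yield min(delay, cap)
        delay *= 2

print([f"{int(d)}s" for d in crashloop_delays(7)])
# ['10s', '20s', '40s', '80s', '160s', '300s', '300s']  (80s is the "1m20s" above)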
Jan 23 16:17:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:36.390174093Z" level=info msg="Removed container dd8e3c10002f1232ffc1d47378587d8a887d344ac7bc85b21aee246ccb252d61: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=9a76148b-3593-4c00-a2f4-8e415a776300 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:17:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:37.357997 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/178.log" Jan 23 16:17:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:37.359862 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165" Jan 23 16:17:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:37.360347 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:17:40 hub-master-0.workload.bos2.lab sshd[12959]: Received disconnect from 2600:52:7:18::11 port 48246:11: disconnected by user Jan 23 16:17:40 hub-master-0.workload.bos2.lab sshd[12959]: Disconnected from user core 2600:52:7:18::11 port 48246 Jan 23 16:17:40 hub-master-0.workload.bos2.lab sshd[12818]: pam_unix(sshd:session): session closed for user core Jan 23 16:17:40 hub-master-0.workload.bos2.lab systemd[1]: session-1.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-1.scope has successfully entered the 'dead' state. Jan 23 16:17:40 hub-master-0.workload.bos2.lab systemd-logind[3052]: Session 1 logged out. Waiting for processes to exit. Jan 23 16:17:40 hub-master-0.workload.bos2.lab systemd[1]: session-1.scope: Consumed 182ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-1.scope completed and consumed the indicated resources. Jan 23 16:17:40 hub-master-0.workload.bos2.lab systemd-logind[3052]: Removed session 1. -- Subject: Session 1 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 1 has been terminated. Jan 23 16:17:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:43.518427 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-master-fld8m" Jan 23 16:17:50 hub-master-0.workload.bos2.lab systemd[1]: Stopping User Manager for UID 1000... -- Subject: Unit user@1000.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@1000.service has begun shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped target Default. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopping Podman Start All Containers With Restart Policy Set To Always... 
-- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopping D-Bus User Message Bus... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopping podman-pause-d01ba5fc.scope. -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Removed slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped D-Bus User Message Bus. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped podman-pause-d01ba5fc.scope. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Removed slice user.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab sh[18575]: time="2023-01-23T16:17:51Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 16:17:51 hub-master-0.workload.bos2.lab sh[18575]: Error: you must provide at least one name or id Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: podman-restart.service: Control process exited, code=exited status=125 Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: podman-restart.service: Failed with result 'exit-code'. -- Subject: Unit failed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit UNIT has entered the 'failed' state with result 'exit-code'. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped Podman Start All Containers With Restart Policy Set To Always. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped target Basic System. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped target Sockets. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Closed GnuPG network certificate management daemon. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Closed D-Bus User Message Bus Socket. 
-- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Closed GnuPG cryptographic agent and passphrase cache. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Closed GnuPG cryptographic agent and passphrase cache (restricted). -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers). -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Closed Podman API Socket. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Closed GnuPG cryptographic agent (ssh-agent emulation). -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped Create User's Volatile Files and Directories. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped target Paths. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped target Timers. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped Podman auto-update timer. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Stopped Daily Cleanup of User's Temporary Directories. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Reached target Shutdown. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Started Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[12877]: Reached target Exit the Session. 
-- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: user@1000.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user@1000.service has successfully entered the 'dead' state. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: Stopped User Manager for UID 1000. -- Subject: Unit user@1000.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@1000.service has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: user@1000.service: Consumed 1.509s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user@1000.service completed and consumed the indicated resources. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: Stopping User runtime directory /run/user/1000... -- Subject: Unit user-runtime-dir@1000.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@1000.service has begun shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: run-user-1000.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-user-1000.mount has successfully entered the 'dead' state. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: user-runtime-dir@1000.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user-runtime-dir@1000.service has successfully entered the 'dead' state. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: Stopped User runtime directory /run/user/1000. -- Subject: Unit user-runtime-dir@1000.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@1000.service has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: user-runtime-dir@1000.service: Consumed 3ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user-runtime-dir@1000.service completed and consumed the indicated resources. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: Removed slice User Slice of UID 1000. -- Subject: Unit user-1000.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-1000.slice has finished shutting down. Jan 23 16:17:51 hub-master-0.workload.bos2.lab systemd[1]: user-1000.slice: Consumed 1.700s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user-1000.slice completed and consumed the indicated resources. 
Jan 23 16:17:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:17:51.997203 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165"
Jan 23 16:17:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:17:51.997943 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.028770134Z" level=info msg="NetworkStart: stopping network for sandbox a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.028828590Z" level=info msg="NetworkStart: stopping network for sandbox 8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.029350405Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/1561b91c-eded-4df6-a169-9268b689cc1d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.029375784Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.029384882Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.029392242Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.029379219Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/60d90c73-6437-4569-8a5b-c4e75a9d05b7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.029572618Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.029580571Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:55.029588552Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:56.022294783Z" level=info msg="NetworkStart: stopping network for sandbox 9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:56.022430560Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/967ec2f3-b32c-47de-8db1-17b29b2c7e99 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:56.022453235Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:56.022460488Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:56.022467716Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:57.019854647Z" level=info msg="NetworkStart: stopping network for sandbox 2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:57.020004817Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/7632f6b2-0c70-437b-a319-5cd4338fad6a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:57.020031381Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:57.020039423Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:57.020046615Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:58.019270442Z" level=info msg="NetworkStart: stopping network for sandbox ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:58.019414588Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/5c222525-139f-4b43-a0aa-ff6b8a50602c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:58.019437857Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:58.019444179Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:58.019450421Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:58.147014280Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.039839217Z" level=info msg="NetworkStart: stopping network for sandbox 315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.039982853Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/345d67b6-b938-44ed-b9d6-4573b7e2960c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.040005611Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.040012476Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.040019775Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046343826Z" level=info msg="NetworkStart: stopping network for sandbox 9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046476773Z" level=info msg="NetworkStart: stopping network for sandbox ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046486991Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/96122fcb-9ebd-4da5-b4c2-8646901b9ab3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046594144Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/72c28500-edd6-4b68-9b49-891cfac62f9c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046609836Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046617977Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046625979Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046619569Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046698350Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.046706253Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.050536705Z" level=info msg="NetworkStart: stopping network for sandbox 41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.050649643Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/7b1ae578-6894-4e2b-8844-7257f2969fba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.050674616Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.050682336Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.050689333Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055412086Z" level=info msg="NetworkStart: stopping network for sandbox c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055561724Z" level=info msg="NetworkStart: stopping network for sandbox 4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055590467Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/29ecbe4e-16ce-4d10-9ed5-4ce31a234d53 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055641191Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055656156Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055667356Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055702984Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/21f7aefc-3661-45bc-a3b0-88c4a031f86d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055728469Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055734741Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.055740675Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.056582575Z" level=info msg="NetworkStart: stopping network for sandbox 97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.056704406Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8f9e42d1-bbb1-44d1-986d-393b4a3ec65e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.056726325Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.056734050Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23
16:17:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:17:59.056741350Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:18:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:02.996305 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165" Jan 23 16:18:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:02.996955 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.357936557Z" level=info msg="NetworkStart: stopping network for sandbox ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.358375434Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/618e7f42-c1c0-4317-84d5-87a5e01414ca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.358402045Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.358409244Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.358416231Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359666568Z" level=info msg="NetworkStart: stopping network for sandbox 548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359697545Z" level=info msg="NetworkStart: stopping network for sandbox 74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359800217Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c0f2a065-6d65-4afb-86ba-1d800647121c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359824060Z" level=error msg="error loading cached 
network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359832006Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359839218Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359850479Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/168a03b2-d6e8-4ccb-a95b-51fc64aaeedb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359877394Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359885352Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.359891809Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.361529824Z" level=info msg="NetworkStart: stopping network for sandbox 3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.361653378Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/55987563-5a97-4997-9b3e-c3c092b18040 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.361674702Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.361681969Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.361689332Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.364167698Z" level=info msg="NetworkStart: stopping network for sandbox e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.364304200Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager 
ID:e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/9147a0e4-4800-4753-a841-ec836c4c3135 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.364331098Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.364338891Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:18:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:06.364346449Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:18:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:13.997082 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165" Jan 23 16:18:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:13.997741 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:27.856414 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:27.856434 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:27.856440 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:27.856447 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:27.856452 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:27.856460 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:27.856465 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:27.997554 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165" Jan 23 16:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:27.998049 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:18:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:28.145672873Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:18:37 hub-master-0.workload.bos2.lab chronyd[2922]: Selected source 69.89.207.99 (2.rhel.pool.ntp.org) Jan 23 16:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490718.1193] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490718.1198] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490718.1199] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490718.1200] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490718.1206] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490718.1211] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.040841693Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.041092755Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.040905884Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.041280373Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-60d90c73\x2d6437\x2d4569\x2d8a5b\x2dc4e75a9d05b7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-60d90c73\x2d6437\x2d4569\x2d8a5b\x2dc4e75a9d05b7.mount has successfully entered the 'dead' state. Jan 23 16:18:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1561b91c\x2deded\x2d4df6\x2da169\x2d9268b689cc1d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1561b91c\x2deded\x2d4df6\x2da169\x2d9268b689cc1d.mount has successfully entered the 'dead' state. Jan 23 16:18:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-60d90c73\x2d6437\x2d4569\x2d8a5b\x2dc4e75a9d05b7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-60d90c73\x2d6437\x2d4569\x2d8a5b\x2dc4e75a9d05b7.mount has successfully entered the 'dead' state. Jan 23 16:18:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1561b91c\x2deded\x2d4df6\x2da169\x2d9268b689cc1d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1561b91c\x2deded\x2d4df6\x2da169\x2d9268b689cc1d.mount has successfully entered the 'dead' state. Jan 23 16:18:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-60d90c73\x2d6437\x2d4569\x2d8a5b\x2dc4e75a9d05b7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-60d90c73\x2d6437\x2d4569\x2d8a5b\x2dc4e75a9d05b7.mount has successfully entered the 'dead' state. Jan 23 16:18:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1561b91c\x2deded\x2d4df6\x2da169\x2d9268b689cc1d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1561b91c\x2deded\x2d4df6\x2da169\x2d9268b689cc1d.mount has successfully entered the 'dead' state. 
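The block above repeats one crio pattern per sandbox: "not found in CNI cache", then "falling back to loading from existing plugins on disk", then "Deleting pod ... from CNI network". That is a cache-then-disk lookup: the runtime first tries the sandbox's cached network config and only rescans the CNI configuration directory on a miss. A minimal Go sketch of that lookup order; the paths and the loadNetConf helper are illustrative assumptions, not CRI-O's actual code:

```go
// Sketch of the cache-then-disk lookup suggested by the log above;
// not CRI-O/libcni source. Directory paths are assumed for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func loadNetConf(cacheDir, confDir, network string) ([]byte, error) {
	// First choice: the cached result for this network.
	cached := filepath.Join(cacheDir, network+".json")
	if data, err := os.ReadFile(cached); err == nil {
		return data, nil
	}
	fmt.Printf("network %q not found in CNI cache, falling back to disk\n", network)
	// Fallback: the first matching conf file on disk.
	matches, err := filepath.Glob(filepath.Join(confDir, "*.conf"))
	if err != nil || len(matches) == 0 {
		return nil, fmt.Errorf("no CNI config for %q on disk", network)
	}
	return os.ReadFile(matches[0])
}

func main() {
	if _, err := loadNetConf("/var/lib/cni/cache", "/etc/cni/net.d", "multus-cni-network"); err != nil {
		fmt.Println(err)
	}
}
```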
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.087357364Z" level=info msg="runSandbox: deleting pod ID 8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06 from idIndex" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.087390429Z" level=info msg="runSandbox: removing pod sandbox 8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.087408597Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.087422036Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.087363344Z" level=info msg="runSandbox: deleting pod ID a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137 from idIndex" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.087485077Z" level=info msg="runSandbox: removing pod sandbox a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.087499427Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.087516568Z" level=info msg="runSandbox: unmounting shmPath for sandbox a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06-userdata-shm.mount: Succeeded.
Jan 23 16:18:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137-userdata-shm.mount: Succeeded.
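The runSandbox lines above always run the same teardown in the same order: delete the pod ID from idIndex, remove the sandbox, delete the container ID, unmount shmPath, and only then (next entries) remove the sandbox from storage. A hedged Go sketch of that kind of keep-going ordered teardown, where a failing step is recorded but does not abort the later steps; the step names mirror the log, everything else is illustrative:

```go
// Illustrative sketch of an ordered sandbox teardown in the spirit of the
// runSandbox messages above; not CRI-O's implementation.
package main

import "fmt"

type step struct {
	name string
	run  func() error
}

// teardown runs every step even if earlier ones fail, collecting errors,
// so a failed unmount does not leave the ID indexes permanently stale.
func teardown(sandboxID string, steps []step) []error {
	var errs []error
	for _, s := range steps {
		fmt.Printf("runSandbox: %s for sandbox %s\n", s.name, sandboxID)
		if err := s.run(); err != nil {
			errs = append(errs, fmt.Errorf("%s: %w", s.name, err))
		}
	}
	return errs
}

func main() {
	noop := func() error { return nil }
	teardown("8d999710ad2e", []step{
		{"deleting pod ID from idIndex", noop},
		{"removing pod sandbox", noop},
		{"deleting container ID from idIndex", noop},
		{"unmounting shmPath", noop},
		{"removing pod sandbox from storage", noop},
	})
}
```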
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.099437128Z" level=info msg="runSandbox: removing pod sandbox from storage: a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.099477008Z" level=info msg="runSandbox: removing pod sandbox from storage: 8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.102948472Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.102968716Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=2cab9c81-167b-4311-a0d1-3ea160a11dfc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.103186 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.103239 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.103262 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.103314 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93a5dfff76835c33b06a893554af2b3a87086fb362c109756fc075d04964137): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.106783173Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:40.106803548Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=6411c9ef-4927-4fbd-9c25-a97b41453637 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.106919 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.106951 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.106971 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.107013 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8d999710ad2e05bd186e0db596119cb4ee06df1119c1a4d54b0d1d9ecdb92e06): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:18:40 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490720.2084] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:40.997462 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165"
Jan 23 16:18:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:40.997981 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.032987710Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.033024914Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-967ec2f3\x2db32c\x2d47de\x2d8db1\x2d17b29b2c7e99.mount: Succeeded.
Jan 23 16:18:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-967ec2f3\x2db32c\x2d47de\x2d8db1\x2d17b29b2c7e99.mount: Succeeded.
Jan 23 16:18:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-967ec2f3\x2db32c\x2d47de\x2d8db1\x2d17b29b2c7e99.mount: Succeeded.
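The recurring "back-off 1m20s restarting failed container=ovnkube-node" entries are kubelet's CrashLoopBackOff at its fourth step: the restart delay doubles from 10s up to a 5m cap, giving 10s, 20s, 40s, 1m20s, 2m40s, 5m. A small Go sketch of that doubling with the kubelet default values; the function itself is an illustration, not kubelet source:

```go
// Sketch of kubelet-style CrashLoopBackOff delays: double from an initial
// delay up to a cap. With kubelet defaults (10s initial, 5m cap) the fourth
// restart waits 1m20s, matching the "back-off 1m20s" entries above.
package main

import (
	"fmt"
	"time"
)

func backoffSteps(initial, maxDelay time.Duration, n int) []time.Duration {
	steps := make([]time.Duration, 0, n)
	d := initial
	for i := 0; i < n; i++ {
		steps = append(steps, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return steps
}

func main() {
	fmt.Println(backoffSteps(10*time.Second, 5*time.Minute, 6))
	// Prints: [10s 20s 40s 1m20s 2m40s 5m0s]
}
```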
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.085272350Z" level=info msg="runSandbox: deleting pod ID 9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26 from idIndex" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.085300138Z" level=info msg="runSandbox: removing pod sandbox 9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.085316404Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.085330282Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26-userdata-shm.mount: Succeeded.
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.106423232Z" level=info msg="runSandbox: removing pod sandbox from storage: 9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.109866397Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:41.109884325Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=67482e26-627c-49a0-a3fe-5e61413619cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:41.110065 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:18:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:41.110107 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:18:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:41.110135 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:18:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:41.110180 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9a230e36e1a001468bf4da9ddb88ad0f297524e17ddcf578b8db1e8eb46adb26): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.030798968Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.030835455Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7632f6b2\x2d0c70\x2d437b\x2da319\x2d5cd4338fad6a.mount: Succeeded.
Jan 23 16:18:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7632f6b2\x2d0c70\x2d437b\x2da319\x2d5cd4338fad6a.mount: Succeeded.
Jan 23 16:18:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7632f6b2\x2d0c70\x2d437b\x2da319\x2d5cd4338fad6a.mount: Succeeded.
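Every sandbox failure above bottoms out in the same Multus message: it polls for a readiness indicator file that the default network plugin writes (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which never appears while ovnkube-node crash-loops) and fails the ADD or DEL when the poll times out. A minimal Go sketch of that poll-until-deadline pattern; the interval and timeout here are assumed values, not Multus's constants:

```go
// Hedged sketch of the PollImmediate-style wait seen in the log: check for
// the readiness indicator file immediately, then at a fixed interval until
// a deadline. Interval and timeout are illustrative.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForReadinessFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // default network is ready
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForReadinessFile("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 5*time.Second)
	fmt.Println(err) // fails on hosts where ovn-kubernetes never wrote the file
}
```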
Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.065297612Z" level=info msg="runSandbox: deleting pod ID 2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef from idIndex" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.065326486Z" level=info msg="runSandbox: removing pod sandbox 2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.065341016Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.065353577Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.086443599Z" level=info msg="runSandbox: removing pod sandbox from storage: 2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.090013584Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:42.090032005Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=de83a8ec-4d71-4fe1-a837-3228daf98754 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:42.090245 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:18:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:42.090402 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:18:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:42.090425 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:18:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:42.090475 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(2bf788d2d0993d56b1ab3ba72584ff20736ebc8645eb680374aef629a31b05ef): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.030165842Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.030202929Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5c222525\x2d139f\x2d4b43\x2da0aa\x2dff6b8a50602c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5c222525\x2d139f\x2d4b43\x2da0aa\x2dff6b8a50602c.mount has successfully entered the 'dead' state. Jan 23 16:18:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5c222525\x2d139f\x2d4b43\x2da0aa\x2dff6b8a50602c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5c222525\x2d139f\x2d4b43\x2da0aa\x2dff6b8a50602c.mount has successfully entered the 'dead' state. Jan 23 16:18:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5c222525\x2d139f\x2d4b43\x2da0aa\x2dff6b8a50602c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5c222525\x2d139f\x2d4b43\x2da0aa\x2dff6b8a50602c.mount has successfully entered the 'dead' state. 
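
The failure pattern above repeats for pod after pod: kubelet asks CRI-O to create a pod sandbox, CRI-O hands network setup to Multus, and Multus refuses to attach anything until the default network's CNI config appears at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. The trailing "timed out waiting for the condition" is the error string of the Kubernetes wait package's poll helper, which the log's own "PollImmediate error waiting for ReadinessIndicatorFile" wording points at. A minimal sketch of that wait loop, assuming the k8s.io/apimachinery wait package rather than Multus's actual source:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the readiness indicator file exists,
// mirroring the "still waiting for readinessindicatorfile @ ..." behavior in
// the entries above. On timeout, wait.PollImmediate returns the exact
// "timed out waiting for the condition" error seen in this journal.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(1*time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			return false, nil // file not there yet; keep polling
		}
		return true, nil
	})
}

func main() {
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForReadinessIndicator(indicator, 30*time.Second); err != nil {
		fmt.Printf("have you checked that your default network is ready? %v\n", err)
	}
}

Until ovn-kubernetes writes that config file, every sandbox add fails the same way and kubelet keeps retrying, which is why the identical message recurs below for dns, etcd-guard, kube-apiserver-guard, and the other pods.
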
Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.075303059Z" level=info msg="runSandbox: deleting pod ID ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10 from idIndex" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.075329182Z" level=info msg="runSandbox: removing pod sandbox ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.075345393Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.075358376Z" level=info msg="runSandbox: unmounting shmPath for sandbox ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.087429175Z" level=info msg="runSandbox: removing pod sandbox from storage: ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.094897829Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:43.094928037Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=f3b46888-64c8-4edc-831c-ca683e51fcc5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:43.095092 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:43.095141 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:18:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:43.095163 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:18:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:43.095225 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ca270bdb1e3f5f8d11796493bf7c582185ee4e0789608d9c5c790abb1cf00a10): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.050396568Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.050435883Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-345d67b6\x2db938\x2d44ed\x2db9d6\x2d4573b7e2960c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-345d67b6\x2db938\x2d44ed\x2db9d6\x2d4573b7e2960c.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.057499488Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.057532335Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.058574836Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.058607413Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.060103421Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.060137796Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-96122fcb\x2d9ebd\x2d4da5\x2db4c2\x2d8646901b9ab3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-96122fcb\x2d9ebd\x2d4da5\x2db4c2\x2d8646901b9ab3.mount has successfully entered the 'dead' state. 
Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.065841520Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.065872589Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.066905536Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.066933754Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7b1ae578\x2d6894\x2d4e2b\x2d8844\x2d7257f2969fba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7b1ae578\x2d6894\x2d4e2b\x2d8844\x2d7257f2969fba.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-72c28500\x2dedd6\x2d4b68\x2d9b49\x2d891cfac62f9c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-72c28500\x2dedd6\x2d4b68\x2d9b49\x2d891cfac62f9c.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-345d67b6\x2db938\x2d44ed\x2db9d6\x2d4573b7e2960c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-345d67b6\x2db938\x2d44ed\x2db9d6\x2d4573b7e2960c.mount has successfully entered the 'dead' state. 
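
The run-utsns-…, run-ipcns-…, and run-netns-….mount units entering the 'dead' state above are the sandbox's namespace bind mounts being torn down during cleanup. systemd escapes "/" as "-" and a literal "-" as "\x2d" in unit names, so run-netns-5c222525\x2d139f\x2d….mount corresponds to /run/netns/5c222525-139f-…. A small illustrative helper (hypothetical, not part of any tool logged here) that reverses just the escapes seen in this journal:

package main

import (
	"fmt"
	"strings"
)

// unitNameToPath reverses systemd's path escaping for the .mount unit names
// in this log: "/" is encoded as "-" and a literal "-" as "\x2d". The inverse
// turns "-" back into "/" first (the 4-character "\x2d" sequences contain no
// "-" and are untouched), then decodes "\x2d" back into "-". Other systemd
// escapes (leading dots, arbitrary \xXX bytes) are ignored here.
func unitNameToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	path := strings.ReplaceAll(name, "-", "/")
	path = strings.ReplaceAll(path, `\x2d`, "-")
	return "/" + path
}

func main() {
	// Unit name taken verbatim from the journal above.
	fmt.Println(unitNameToPath(`run-netns-5c222525\x2d139f\x2d4b43\x2da0aa\x2dff6b8a50602c.mount`))
	// Prints: /run/netns/5c222525-139f-4b43-a0aa-ff6b8a50602c
}
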
Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.068000030Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.068033839Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-29ecbe4e\x2d16ce\x2d4d10\x2d9ed5\x2d4ce31a234d53.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-29ecbe4e\x2d16ce\x2d4d10\x2d9ed5\x2d4ce31a234d53.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-21f7aefc\x2d3661\x2d45bc\x2da3b0\x2d88c4a031f86d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-21f7aefc\x2d3661\x2d45bc\x2da3b0\x2d88c4a031f86d.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8f9e42d1\x2dbbb1\x2d44d1\x2d986d\x2d393b4a3ec65e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8f9e42d1\x2dbbb1\x2d44d1\x2d986d\x2d393b4a3ec65e.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-21f7aefc\x2d3661\x2d45bc\x2da3b0\x2d88c4a031f86d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-21f7aefc\x2d3661\x2d45bc\x2da3b0\x2d88c4a031f86d.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7b1ae578\x2d6894\x2d4e2b\x2d8844\x2d7257f2969fba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7b1ae578\x2d6894\x2d4e2b\x2d8844\x2d7257f2969fba.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-96122fcb\x2d9ebd\x2d4da5\x2db4c2\x2d8646901b9ab3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-96122fcb\x2d9ebd\x2d4da5\x2db4c2\x2d8646901b9ab3.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-72c28500\x2dedd6\x2d4b68\x2d9b49\x2d891cfac62f9c.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-72c28500\x2dedd6\x2d4b68\x2d9b49\x2d891cfac62f9c.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-29ecbe4e\x2d16ce\x2d4d10\x2d9ed5\x2d4ce31a234d53.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-29ecbe4e\x2d16ce\x2d4d10\x2d9ed5\x2d4ce31a234d53.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8f9e42d1\x2dbbb1\x2d44d1\x2d986d\x2d393b4a3ec65e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8f9e42d1\x2dbbb1\x2d44d1\x2d986d\x2d393b4a3ec65e.mount has successfully entered the 'dead' state. Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.088314770Z" level=info msg="runSandbox: deleting pod ID 315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce from idIndex" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.088342394Z" level=info msg="runSandbox: removing pod sandbox 315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.088356107Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.088367833Z" level=info msg="runSandbox: unmounting shmPath for sandbox 315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100306801Z" level=info msg="runSandbox: deleting pod ID 41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71 from idIndex" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100326122Z" level=info msg="runSandbox: deleting pod ID 4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950 from idIndex" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100357139Z" level=info msg="runSandbox: removing pod sandbox 4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100369670Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100382304Z" level=info msg="runSandbox: unmounting shmPath for sandbox 
4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100322135Z" level=info msg="runSandbox: deleting pod ID 9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf from idIndex" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100437433Z" level=info msg="runSandbox: removing pod sandbox 9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100453760Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100466233Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100309006Z" level=info msg="runSandbox: deleting pod ID ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592 from idIndex" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100525724Z" level=info msg="runSandbox: removing pod sandbox ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100539973Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100553922Z" level=info msg="runSandbox: unmounting shmPath for sandbox ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100333593Z" level=info msg="runSandbox: removing pod sandbox 41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100590480Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.100603963Z" level=info msg="runSandbox: unmounting shmPath for sandbox 41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox 
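
Several RunPodSandbox cleanups run concurrently here, so their runSandbox steps (deleting the pod ID from idIndex, removing the sandbox, unmounting shmPath, removing it from storage, releasing names) interleave in the journal; the id=<uuid> field is what ties each step back to a single request. A rough sketch (assumed field layout, not an official CRI-O tool) that regroups lines by that id for easier reading:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// idPattern matches the request identifier CRI-O attaches to each runSandbox
// entry, e.g. id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b.
var idPattern = regexp.MustCompile(`id=([0-9a-f-]{36})`)

func main() {
	groups := make(map[string][]string) // request id -> its log lines, in order
	var order []string                  // first-seen order of request ids

	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for scanner.Scan() {
		line := scanner.Text()
		m := idPattern.FindStringSubmatch(line)
		if m == nil {
			continue // not a request-scoped CRI-O entry
		}
		if _, seen := groups[m[1]]; !seen {
			order = append(order, m[1])
		}
		groups[m[1]] = append(groups[m[1]], line)
	}

	for _, id := range order {
		fmt.Printf("=== request %s ===\n", id)
		for _, line := range groups[id] {
			fmt.Println(line)
		}
	}
}

One plausible invocation, assuming one journal entry per input line: journalctl -u crio --no-pager | go run groupbyid.go
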
Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105342063Z" level=info msg="runSandbox: deleting pod ID c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea from idIndex" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105368888Z" level=info msg="runSandbox: removing pod sandbox c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105383283Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105397520Z" level=info msg="runSandbox: unmounting shmPath for sandbox c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105347572Z" level=info msg="runSandbox: deleting pod ID 97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730 from idIndex" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105460121Z" level=info msg="runSandbox: removing pod sandbox 97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105473506Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105485442Z" level=info msg="runSandbox: unmounting shmPath for sandbox 97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.105441144Z" level=info msg="runSandbox: removing pod sandbox from storage: 315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.109211424Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.109232982Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=06bafdca-ad92-4a27-b70f-4dd628e567ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 
16:18:44.109453 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.109499 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.109520 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.109566 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.117505765Z" level=info msg="runSandbox: removing pod sandbox from storage: 9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.117523143Z" level=info msg="runSandbox: removing pod sandbox from storage: 41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.117521120Z" level=info msg="runSandbox: removing pod sandbox from storage: ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.117527316Z" level=info msg="runSandbox: removing pod sandbox from storage: 4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.120696997Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.120715465Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=95cdfefc-7c38-4941-b563-a3a060766c88 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.120885 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.120916 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.120936 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.120975 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.121447590Z" level=info msg="runSandbox: removing pod sandbox from storage: c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.122438766Z" level=info msg="runSandbox: removing pod sandbox from storage: 97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.123883240Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.123902470Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=388a270f-180e-4d8b-a982-27661c9241e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.124171 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.124225 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.124252 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.124312 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.126835041Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.126852164Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=e7dc8422-763a-40d9-85d0-7d15b332acca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.127012 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.127046 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.127068 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.127107 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.129867716Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.129888237Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=9346d905-7bcc-4295-b7ed-2ea8ea6c8a5b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.130138 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.130171 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.130192 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.130239 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.132958739Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.132979233Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=0d87279b-0989-479d-b0f7-da8119985593 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.133184 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.133218 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.133240 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.133275 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.136051317Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:44.136070332Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=37d67851-3923-4d67-9acc-3da9b8bc7c14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.136168 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.136199 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.136224 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:18:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:44.136262 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-29ecbe4e\x2d16ce\x2d4d10\x2d9ed5\x2d4ce31a234d53.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-29ecbe4e\x2d16ce\x2d4d10\x2d9ed5\x2d4ce31a234d53.mount has successfully entered the 'dead' state. 
Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-21f7aefc\x2d3661\x2d45bc\x2da3b0\x2d88c4a031f86d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-21f7aefc\x2d3661\x2d45bc\x2da3b0\x2d88c4a031f86d.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8f9e42d1\x2dbbb1\x2d44d1\x2d986d\x2d393b4a3ec65e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8f9e42d1\x2dbbb1\x2d44d1\x2d986d\x2d393b4a3ec65e.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4a4d457e4b9218d47bc9799e1f6aa14a70beb3dc45238b58af18eb47b298a950-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7b1ae578\x2d6894\x2d4e2b\x2d8844\x2d7257f2969fba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7b1ae578\x2d6894\x2d4e2b\x2d8844\x2d7257f2969fba.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c9ae9c5ef9b880931231b790ddcc07259545911e0eea083d03c89fdd02f753ea-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-97e9caf10c2664fb916255bbb105b5a3cad7a7ba93bf0951b2ace8f658941730-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-96122fcb\x2d9ebd\x2d4da5\x2db4c2\x2d8646901b9ab3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-96122fcb\x2d9ebd\x2d4da5\x2db4c2\x2d8646901b9ab3.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-72c28500\x2dedd6\x2d4b68\x2d9b49\x2d891cfac62f9c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-72c28500\x2dedd6\x2d4b68\x2d9b49\x2d891cfac62f9c.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9b0a1dc117ea5cecaec2cc14bfc21a17dd8ad0a70b9e153af010581bc7a68ccf-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-345d67b6\x2db938\x2d44ed\x2db9d6\x2d4573b7e2960c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-345d67b6\x2db938\x2d44ed\x2db9d6\x2d4573b7e2960c.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-41c4096daee74c727c9051e9cb0b03a3661cddeffc568d9951d53930eb299d71-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ac2514e07e63614e20068fbba9a2bb954eb488d1e9d26d2776273d7161cf7592-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-315ec2b567ce5fe4b4b83b709fa3d396b6b35ae755045158e22e1d3af08048ce-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.370488127Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.370535467Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.370598261Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.370635578Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.370738624Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.370766872Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:18:51.371559426Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.371594205Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-55987563\x2d5a97\x2d4997\x2d9b3e\x2dc3c092b18040.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-55987563\x2d5a97\x2d4997\x2d9b3e\x2dc3c092b18040.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c0f2a065\x2d6d65\x2d4afb\x2d86ba\x2d1d800647121c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c0f2a065\x2d6d65\x2d4afb\x2d86ba\x2d1d800647121c.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.375258110Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.375296480Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-168a03b2\x2dd6e8\x2d4ccb\x2da95b\x2d51fc64aaeedb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-168a03b2\x2dd6e8\x2d4ccb\x2da95b\x2d51fc64aaeedb.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-618e7f42\x2dc1c0\x2d4317\x2d84d5\x2d87a5e01414ca.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-618e7f42\x2dc1c0\x2d4317\x2d84d5\x2d87a5e01414ca.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9147a0e4\x2d4800\x2d4753\x2da841\x2dec836c4c3135.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9147a0e4\x2d4800\x2d4753\x2da841\x2dec836c4c3135.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-168a03b2\x2dd6e8\x2d4ccb\x2da95b\x2d51fc64aaeedb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-168a03b2\x2dd6e8\x2d4ccb\x2da95b\x2d51fc64aaeedb.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-618e7f42\x2dc1c0\x2d4317\x2d84d5\x2d87a5e01414ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-618e7f42\x2dc1c0\x2d4317\x2d84d5\x2d87a5e01414ca.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.423308989Z" level=info msg="runSandbox: deleting pod ID 74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3 from idIndex" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.423336498Z" level=info msg="runSandbox: removing pod sandbox 74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.423353095Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.423369512Z" level=info msg="runSandbox: unmounting shmPath for sandbox 74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427336470Z" level=info msg="runSandbox: deleting pod ID e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740 from idIndex" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427368847Z" level=info msg="runSandbox: removing pod sandbox e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427382841Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427397746Z" 
level=info msg="runSandbox: unmounting shmPath for sandbox e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427408488Z" level=info msg="runSandbox: deleting pod ID 548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec from idIndex" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427340009Z" level=info msg="runSandbox: deleting pod ID 3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d from idIndex" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427462407Z" level=info msg="runSandbox: removing pod sandbox 3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427478012Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427490449Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427341778Z" level=info msg="runSandbox: deleting pod ID ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6 from idIndex" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427520558Z" level=info msg="runSandbox: removing pod sandbox ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427535067Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427546287Z" level=info msg="runSandbox: unmounting shmPath for sandbox ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427438083Z" level=info msg="runSandbox: removing pod sandbox 548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427629022Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec" 
id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.427644343Z" level=info msg="runSandbox: unmounting shmPath for sandbox 548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.439482004Z" level=info msg="runSandbox: removing pod sandbox from storage: 74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.440462818Z" level=info msg="runSandbox: removing pod sandbox from storage: ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.442930884Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.442949730Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=9bcc6278-40e6-42f8-a818-68e3f259a387 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.443219 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.443265 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.443285 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.443331 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.443455264Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.443479359Z" level=info msg="runSandbox: removing pod sandbox from storage: 548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.443457612Z" level=info msg="runSandbox: removing pod sandbox from storage: e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.446289671Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.446308412Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=22a1fc5c-a84f-48e0-83e1-a92cbe8a1f73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.446611 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.446649 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.446672 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.446721 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.449451776Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.449474265Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=7df0b47a-ae84-46ff-8e65-aea373fd6603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.449618 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.449646 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.449665 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.449701 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.452527152Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.452545255Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=68d65f17-b5f1-4ffb-8c09-cafb36d3d6a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.452763 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.452798 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.452820 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.452860 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.455474584Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.455494680Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=44e135a1-e1be-4381-bb3a-ccac559864b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.455722 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.455755 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.455777 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.455814 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9147a0e4\x2d4800\x2d4753\x2da841\x2dec836c4c3135.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9147a0e4\x2d4800\x2d4753\x2da841\x2dec836c4c3135.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9147a0e4\x2d4800\x2d4753\x2da841\x2dec836c4c3135.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9147a0e4\x2d4800\x2d4753\x2da841\x2dec836c4c3135.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-55987563\x2d5a97\x2d4997\x2d9b3e\x2dc3c092b18040.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-55987563\x2d5a97\x2d4997\x2d9b3e\x2dc3c092b18040.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-55987563\x2d5a97\x2d4997\x2d9b3e\x2dc3c092b18040.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-55987563\x2d5a97\x2d4997\x2d9b3e\x2dc3c092b18040.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c0f2a065\x2d6d65\x2d4afb\x2d86ba\x2d1d800647121c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c0f2a065\x2d6d65\x2d4afb\x2d86ba\x2d1d800647121c.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c0f2a065\x2d6d65\x2d4afb\x2d86ba\x2d1d800647121c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c0f2a065\x2d6d65\x2d4afb\x2d86ba\x2d1d800647121c.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-168a03b2\x2dd6e8\x2d4ccb\x2da95b\x2d51fc64aaeedb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-168a03b2\x2dd6e8\x2d4ccb\x2da95b\x2d51fc64aaeedb.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-618e7f42\x2dc1c0\x2d4317\x2d84d5\x2d87a5e01414ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-618e7f42\x2dc1c0\x2d4317\x2d84d5\x2d87a5e01414ca.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e58be35c640857bbe56a8500a7d9fb69ca4d6a919ddefba037a4296fffdd5740-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3d4d0633987a4c9fc14d78a003963382aeae7e3fcf9f9d93ae03728ab0244c1d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ceb0c5f7c60d22cf90e7db1fe851ef5236ed5f750f5a4367e6e62f3f5762a9a6-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-548f0078e3ea9404d92985650ef5093ebb0fc9fdf921d82a6ebe19ac346057ec-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:18:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-74750e971e6b266021604f4308e4aa70c644b6b6c513ece8f45a38f780b6c5e3-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:51.496826 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:51.496925 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:51.497138 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497185705Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497230889Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:51.497231 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497278732Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497306374Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:51.497337 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497398698Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497430731Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497513565Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497545766Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497519036Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.497604518Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.521459598Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/7cecf817-54a9-4a07-a14b-2c63eb2c5aa5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.521606998Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.524793766Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/ac39ae98-c7f7-461e-a548-a125ab5cad56 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.524816374Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.527472461Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ee200082-0410-4a48-a6c7-272fa53418f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.527493070Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.531045698Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/a8baf406-4d74-4f21-8848-c6416ecfde95 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.531067119Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.531919470Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ed6c97ee-de22-4fd9-9dd5-efe010e5272d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.531941237Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:51.995999 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.996422373Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:51.996460924Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:51.996757 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165"
Jan 23 16:18:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:18:51.997264 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:18:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:52.007040310Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/dacd4aca-760c-445b-8f00-bbd8db96c143 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:52.007062570Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:53.996142 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:18:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:53.996246 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:53.996517653Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:53.996553767Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:53.996633711Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:53.996662744Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.015232331Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/438a2a83-0bc4-4204-8879-aeb4fdf0d710 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.015264171Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.017455998Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/189679d9-8ded-4bb9-a914-4e0809035e52 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.017475841Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:54.996028 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:54.996143 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:54.996289 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.996381253Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.996425583Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.996529340Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.996552960Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.996597562Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:54.996564627Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:55.014138318Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/7b87abc3-4053-46ec-b351-bf6de2808dde Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:55.014160296Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:55.015806326Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/bebd0895-5cdf-462c-8288-1c119faa3e13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:55.015828697Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:55.016758936Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ab6eb2eb-ee40-4335-a882-ea110e579f10 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:55.016780363Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:55.995961 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:18:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:55.996376805Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:55.996414725Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:56.006738140Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/e7f17118-a30a-4506-ade7-40a7bd61a80a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:56.006764543Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:57.996130 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:18:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:57.996291 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:57.996650591Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:57.996686562Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:57.996763000Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:57.996794033Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:57.997043 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:57.997267313Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:57.997294236Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.010414476Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/31efc2c9-8c74-4e74-90c8-bc1baf5702b4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.010432830Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.012054808Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/9048a124-68ed-4701-8177-80310566e7de Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.012080225Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.020434187Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/142a6841-2287-4e79-a7ff-103322c5e520 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.020456991Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.145424640Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:58.995727 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:18:58.995891 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.996065803Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.996101700Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.996198767Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:58.996248347Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:59.010159645Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d981c481-0c94-45a9-968b-8135a8b5691e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:59.010179435Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:59.012406373Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/1e0d9b43-7902-4062-ab49-aa1fc59bc60e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:18:59.012429264Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:19:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:02.996704 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165"
Jan 23 16:19:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:02.997486976Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=c95b0a29-f842-4c36-a744-a587096160da name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:19:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:02.997645885Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c95b0a29-f842-4c36-a744-a587096160da name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:19:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:02.998127651Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=0e65150d-d292-44b2-8829-8da2a3017b2b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:19:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:02.998215888Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0e65150d-d292-44b2-8829-8da2a3017b2b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:19:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:02.999304540Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=96c1570f-fa3b-4866-8563-834185116aab name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:19:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:02.999376718Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:19:03 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope.
-- Subject: Unit crio-conmon-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:19:03 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.
-- Subject: Unit crio-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.114018719Z" level=info msg="Created container 8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=96c1570f-fa3b-4866-8563-834185116aab name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.114512128Z" level=info msg="Starting container: 8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" id=9f2467e9-c430-495f-afc3-9cb98152deff name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.133761080Z" level=info msg="Started container" PID=20872 containerID=8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=9f2467e9-c430-495f-afc3-9cb98152deff name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.138084862Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.148334799Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.148349993Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.148359517Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.156728316Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.156744915Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.156754325Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.165272703Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.165297913Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.165308641Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.173484730Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.173500232Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.173510936Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:19:03 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:19:03.181211551Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:03.181231802Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:19:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:03.519629 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/178.log" Jan 23 16:19:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:03.520756 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983} Jan 23 16:19:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:03.521114 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:19:03 hub-master-0.workload.bos2.lab conmon[20857]: conmon 8db07ceee8f9d189ad62 : container 20872 exited with status 1 Jan 23 16:19:03 hub-master-0.workload.bos2.lab systemd[1]: crio-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope has successfully entered the 'dead' state. Jan 23 16:19:03 hub-master-0.workload.bos2.lab systemd[1]: crio-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope: Consumed 569ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope completed and consumed the indicated resources. Jan 23 16:19:03 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope has successfully entered the 'dead' state. Jan 23 16:19:03 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope: Consumed 44ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983.scope completed and consumed the indicated resources. 
Jan 23 16:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:04.524118 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/179.log"
Jan 23 16:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:04.524699 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/178.log"
Jan 23 16:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:04.525757 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" exitCode=1
Jan 23 16:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:04.525785 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983}
Jan 23 16:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:04.525806 8631 scope.go:115] "RemoveContainer" containerID="401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165"
Jan 23 16:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:04.526701 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983"
Jan 23 16:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:04.526736702Z" level=info msg="Removing container: 401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165" id=51d3cea5-940c-4274-a45f-28cdf0926e63 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:19:04.527224 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:19:04 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-62fda277706cdbe5400a2144d51e8b0abd92321341fbeb968686a1dd7494394e-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-62fda277706cdbe5400a2144d51e8b0abd92321341fbeb968686a1dd7494394e-merged.mount has successfully entered the 'dead' state.
Jan 23 16:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:04.560473205Z" level=info msg="Removed container 401fbba2d131a0bee3faed1c57cb52f3ae5d45bfe17cb4c9052efcd24fea5165: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=51d3cea5-940c-4274-a45f-28cdf0926e63 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:19:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:05.529458 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/179.log" Jan 23 16:19:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:05.531281 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:19:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:19:05.531797 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:19:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:16.997011 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:19:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:19:16.997516 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:27.856890 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:27.856911 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:27.856921 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:27.856931 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:27.856937 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:27.856944 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:27.856950 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 
16:19:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:28.144264623Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:19:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:30.001018 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:19:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:19:30.001697 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.535721686Z" level=info msg="NetworkStart: stopping network for sandbox b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.535870110Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/7cecf817-54a9-4a07-a14b-2c63eb2c5aa5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.535893939Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.535900755Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.535908143Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.537835845Z" level=info msg="NetworkStart: stopping network for sandbox 3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.537974015Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/ac39ae98-c7f7-461e-a548-a125ab5cad56 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.537996287Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.538004883Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.538011027Z" level=info msg="Deleting 
pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.541604219Z" level=info msg="NetworkStart: stopping network for sandbox 985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.541753993Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ee200082-0410-4a48-a6c7-272fa53418f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.541776761Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.541783990Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.541790544Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545455386Z" level=info msg="NetworkStart: stopping network for sandbox 108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545581445Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/a8baf406-4d74-4f21-8848-c6416ecfde95 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545608149Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545616078Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545622991Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545719574Z" level=info msg="NetworkStart: stopping network for sandbox 8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545847596Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13 UID:69794e08-d62b-401c-8dea-a730bf37256a 
NetNS:/var/run/netns/ed6c97ee-de22-4fd9-9dd5-efe010e5272d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545871015Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545878989Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:36.545886316Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:37.021182140Z" level=info msg="NetworkStart: stopping network for sandbox fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:37.021339416Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/dacd4aca-760c-445b-8f00-bbd8db96c143 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:37.021364051Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:37.021372171Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:37.021380704Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.488646 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-bcwzw] Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.488694 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:19:38 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-besteffort-poda316a3e8_6ad7_4240_b3a4_3752ec397ce5.slice. -- Subject: Unit kubepods-besteffort-poda316a3e8_6ad7_4240_b3a4_3752ec397ce5.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-besteffort-poda316a3e8_6ad7_4240_b3a4_3752ec397ce5.slice has finished starting up. -- -- The start-up result is done. 
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.528222 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-bcwzw]
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.602362 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szpgq\" (UniqueName: \"kubernetes.io/projected/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-kube-api-access-szpgq\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.602391 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.602425 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.602445 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-ready\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.703010 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.703040 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-ready\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.703057 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-szpgq\" (UniqueName: \"kubernetes.io/projected/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-kube-api-access-szpgq\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.703073 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.703142 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.703264 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-ready\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.703514 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.716953 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-szpgq\" (UniqueName: \"kubernetes.io/projected/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-kube-api-access-szpgq\") pod \"cni-sysctl-allowlist-ds-bcwzw\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:38.803523 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:38.803928445Z" level=info msg="Running pod sandbox: openshift-multus/cni-sysctl-allowlist-ds-bcwzw/POD" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:19:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:38.803971313Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:38.815703505Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-bcwzw Namespace:openshift-multus ID:b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69 UID:a316a3e8-6ad7-4240-b3a4-3752ec397ce5 NetNS:/var/run/netns/a31daa7d-d853-4e54-aedf-a49ca6724db9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:19:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:38.815728789Z" level=info msg="Adding pod openshift-multus_cni-sysctl-allowlist-ds-bcwzw to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.028372856Z" level=info msg="NetworkStart: stopping network for sandbox 81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.028532365Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/438a2a83-0bc4-4204-8879-aeb4fdf0d710 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.028557127Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.028565387Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.028572276Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.029818644Z" level=info msg="NetworkStart: stopping network for sandbox a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.029947295Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/189679d9-8ded-4bb9-a914-4e0809035e52 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.029968840Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.029974901Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:19:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:39.029981498Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.027459411Z" level=info msg="NetworkStart: stopping network for sandbox 6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.027586475Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/7b87abc3-4053-46ec-b351-bf6de2808dde Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.027608895Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.027615044Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.027620852Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029157200Z" level=info msg="NetworkStart: stopping network for sandbox b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029297901Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/bebd0895-5cdf-462c-8288-1c119faa3e13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029321707Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029331143Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029339413Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:19:40 hub-master-0.workload.bos2.lab
crio[8584]: time="2023-01-23 16:19:40.029547744Z" level=info msg="NetworkStart: stopping network for sandbox 0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029680983Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ab6eb2eb-ee40-4335-a882-ea110e579f10 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029706739Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029717476Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:40.029727125Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:41.019881995Z" level=info msg="NetworkStart: stopping network for sandbox 940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:41.020172225Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/e7f17118-a30a-4506-ade7-40a7bd61a80a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:41.020199803Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:41.020213450Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:41.020220849Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.025254973Z" level=info msg="NetworkStart: stopping network for sandbox 6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.025415747Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/31efc2c9-8c74-4e74-90c8-bc1baf5702b4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:43 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:19:43.025440339Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.025446850Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.025453689Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.025944608Z" level=info msg="NetworkStart: stopping network for sandbox 2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.026059975Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/9048a124-68ed-4701-8177-80310566e7de Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.026082985Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.026090097Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.026096351Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.032984119Z" level=info msg="NetworkStart: stopping network for sandbox 7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.033099036Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/142a6841-2287-4e79-a7ff-103322c5e520 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.033121275Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.033127979Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:43.033134395Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.022536820Z" 
level=info msg="NetworkStart: stopping network for sandbox f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.022700694Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d981c481-0c94-45a9-968b-8135a8b5691e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.022722860Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.022729415Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.022735719Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.025287161Z" level=info msg="NetworkStart: stopping network for sandbox dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.025420916Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/1e0d9b43-7902-4062-ab49-aa1fc59bc60e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.025443742Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.025450682Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:19:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:44.025457971Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:19:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:19:44.996719 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:19:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:19:44.997229 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:19:48 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00093|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 adds) Jan 23 16:19:55 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:19:55.996389 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:19:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:19:55.996885 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:19:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:19:58.142808061Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1225] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1230] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1231] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1482] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1483] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1495] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1497] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1498] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1499] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1503] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490808.1507] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:20:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490809.9102] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:20:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:10.997257 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:20:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:10.997827 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:20:21 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.548035197Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.548251674Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.548994870Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.549032195Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ac39ae98\x2dc7f7\x2d461e\x2da548\x2da125ab5cad56.mount: Succeeded.
Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7cecf817\x2d54a9\x2d4a07\x2da14b\x2d2c63eb2c5aa5.mount: Succeeded.
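[annotation, not captured log output] The ReadinessIndicatorFile timeouts above and the CrashLoopBackOff messages for ovnkube-node-897lw a few entries earlier look like the same problem: until the ovnkube-node container stays up, the default network never becomes ready for Multus. A minimal sketch for confirming that from the node, assuming crictl and a working oc login (names are taken from this log):
    crictl ps -a --name ovnkube-node                                          # confirm the restart loop CRI-O is reporting
    oc -n openshift-ovn-kubernetes logs ovnkube-node-897lw -c ovnkube-node --previous   # why the last run of the container exited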
Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.553218124Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.553254365Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.556594084Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.556626142Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ee200082\x2d0410\x2d4a48\x2da6c7\x2d272fa53418f5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ee200082\x2d0410\x2d4a48\x2da6c7\x2d272fa53418f5.mount has successfully entered the 'dead' state. 
Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.557343716Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.557372524Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ed6c97ee\x2dde22\x2d4fd9\x2d9dd5\x2defe010e5272d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ed6c97ee\x2dde22\x2d4fd9\x2d9dd5\x2defe010e5272d.mount has successfully entered the 'dead' state. Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a8baf406\x2d4d74\x2d4f21\x2d8848\x2dc6416ecfde95.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a8baf406\x2d4d74\x2d4f21\x2d8848\x2dc6416ecfde95.mount has successfully entered the 'dead' state. Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ac39ae98\x2dc7f7\x2d461e\x2da548\x2da125ab5cad56.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ac39ae98\x2dc7f7\x2d461e\x2da548\x2da125ab5cad56.mount has successfully entered the 'dead' state. Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7cecf817\x2d54a9\x2d4a07\x2da14b\x2d2c63eb2c5aa5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7cecf817\x2d54a9\x2d4a07\x2da14b\x2d2c63eb2c5aa5.mount has successfully entered the 'dead' state. Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a8baf406\x2d4d74\x2d4f21\x2d8848\x2dc6416ecfde95.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a8baf406\x2d4d74\x2d4f21\x2d8848\x2dc6416ecfde95.mount has successfully entered the 'dead' state. Jan 23 16:20:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ee200082\x2d0410\x2d4a48\x2da6c7\x2d272fa53418f5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ee200082\x2d0410\x2d4a48\x2da6c7\x2d272fa53418f5.mount has successfully entered the 'dead' state. 
Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.587282173Z" level=info msg="runSandbox: deleting pod ID 3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e from idIndex" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.587306136Z" level=info msg="runSandbox: removing pod sandbox 3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.587320908Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.587333656Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.587304829Z" level=info msg="runSandbox: deleting pod ID b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa from idIndex" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.587402653Z" level=info msg="runSandbox: removing pod sandbox b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.587420361Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.587438449Z" level=info msg="runSandbox: unmounting shmPath for sandbox b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.591288973Z" level=info msg="runSandbox: deleting pod ID 985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350 from idIndex" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.591316549Z" level=info msg="runSandbox: removing pod sandbox 985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.591330845Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.591347597Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.591292758Z" level=info msg="runSandbox: deleting pod ID 108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354 from idIndex" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.591405627Z" level=info msg="runSandbox: removing pod sandbox 108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.591420944Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.591435005Z" level=info msg="runSandbox: unmounting shmPath for sandbox 108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.595296615Z" level=info msg="runSandbox: deleting pod ID 8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13 from idIndex" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.595321715Z" level=info msg="runSandbox: removing pod sandbox 8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.595333899Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.595349636Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.599440450Z" level=info msg="runSandbox: removing pod sandbox from storage: 3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.599439535Z" level=info msg="runSandbox: removing pod sandbox from storage: b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.602455538Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.602475669Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=23657771-1a5c-4ecc-ba90-9c7e4376a62b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.602900 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.602942 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.602963 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.603008 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.603482341Z" level=info msg="runSandbox: removing pod sandbox from storage: 108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.603775986Z" level=info msg="runSandbox: removing pod sandbox from storage: 985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.605644563Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.605662644Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=675ba6c6-9d95-498f-a711-154f46286b93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.605890 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.605923 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.605943 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.605981 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.609383890Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.609408117Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=6caca365-43dd-486f-bf90-cc607c54f609 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.609641 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.609675 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.609696 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.609732 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.611521749Z" level=info msg="runSandbox: removing pod sandbox from storage: 8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.612742828Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.612764133Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=c54e014c-24fd-407c-a57a-5ffa2d2a88ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.612959 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.612989 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.613009 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.613043 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.615986253Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.616007199Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ebcfd213-3dee-4341-b15e-90f830d1d8e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.616303 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.616332 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.616355 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:21.616391 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:21.675114 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:21.675257 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:21.675318 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675457261Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675490777Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:21.675571 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675565873Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675595914Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:20:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:21.675594 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675577098Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675679450Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675739392Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675765152Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.675990608Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.676022829Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.707228456Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/36ae9eb8-8f88-456c-82b6-fb4d7e8511c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.707255144Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.707814305Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/a959a028-05e0-4a1b-8772-3a37fb251d3f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.707837824Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.709515055Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/c2f8eb26-2510-4422-a20c-4853c6837ad1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:20:21.709536093Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.712389634Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/09180d93-ac2b-446e-a32e-482ee43aea56 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.712410835Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.713119567Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/a762ce1d-1011-414a-9ffd-eaeed9da0efd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:20:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:21.713141637Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:22.032557504Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:22.032597326Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:22.072380676Z" level=info msg="runSandbox: deleting pod ID fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc from idIndex" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:22.072410700Z" level=info msg="runSandbox: removing pod sandbox fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:20:22.072426519Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:22.072439884Z" level=info msg="runSandbox: unmounting shmPath for sandbox fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:22.080461883Z" level=info msg="runSandbox: removing pod sandbox from storage: fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:22.083915282Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:22.083935362Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=06d8f52f-d71e-4a40-8a83-e957ad8f000d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:22.084133 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:22.084313 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:20:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:22.084339 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:20:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:22.084402 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dacd4aca\x2d760c\x2d445b\x2d8f00\x2dbbd8db96c143.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dacd4aca\x2d760c\x2d445b\x2d8f00\x2dbbd8db96c143.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dacd4aca\x2d760c\x2d445b\x2d8f00\x2dbbd8db96c143.mount: Succeeded.
Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fee9f15ff2322de7e10d546ca2aeba474d712f87203275a3f566612d9cd599cc-userdata-shm.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ed6c97ee\x2dde22\x2d4fd9\x2d9dd5\x2defe010e5272d.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ed6c97ee\x2dde22\x2d4fd9\x2d9dd5\x2defe010e5272d.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a8baf406\x2d4d74\x2d4f21\x2d8848\x2dc6416ecfde95.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ee200082\x2d0410\x2d4a48\x2da6c7\x2d272fa53418f5.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8749e6f414900ebf162fc53cd18afaa9b6de5d1d4014cacad6ca8f0cada4fe13-userdata-shm.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ac39ae98\x2dc7f7\x2d461e\x2da548\x2da125ab5cad56.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-108c146741d4e1d46ce88c3bcd1e8276d10f897b3ea3d0e45d0916ba374aa354-userdata-shm.mount: Succeeded.
Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7cecf817\x2d54a9\x2d4a07\x2da14b\x2d2c63eb2c5aa5.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-985296b7c6615383dfaa67751e08216dfdad997b9a16dde7bef9aae973e19350-userdata-shm.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3bd55a38b2c93723119e12aab645367abde090a322a004616893a60d5c7c4c3e-userdata-shm.mount: Succeeded. Jan 23 16:20:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b10d775df1f3789d1373f9482be11ad50c7be4c0572f1c69d2feb3ef203ecbfa-userdata-shm.mount: Succeeded.
Jan 23 16:20:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:23.827995448Z" level=info msg="NetworkStart: stopping network for sandbox b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:23.828146691Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-bcwzw Namespace:openshift-multus ID:b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69 UID:a316a3e8-6ad7-4240-b3a4-3752ec397ce5 NetNS:/var/run/netns/a31daa7d-d853-4e54-aedf-a49ca6724db9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:20:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:23.828171749Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:20:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:23.828178636Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:20:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:23.828185317Z" level=info msg="Deleting pod openshift-multus_cni-sysctl-allowlist-ds-bcwzw from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.039884005Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.039924144Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.041083504Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:20:24.041119615Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-438a2a83\x2d0bc4\x2d4204\x2d8879\x2daeb4fdf0d710.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-438a2a83\x2d0bc4\x2d4204\x2d8879\x2daeb4fdf0d710.mount has successfully entered the 'dead' state. Jan 23 16:20:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-189679d9\x2d8ded\x2d4bb9\x2da914\x2d4e0809035e52.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-189679d9\x2d8ded\x2d4bb9\x2da914\x2d4e0809035e52.mount has successfully entered the 'dead' state. Jan 23 16:20:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-189679d9\x2d8ded\x2d4bb9\x2da914\x2d4e0809035e52.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-189679d9\x2d8ded\x2d4bb9\x2da914\x2d4e0809035e52.mount has successfully entered the 'dead' state. Jan 23 16:20:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-438a2a83\x2d0bc4\x2d4204\x2d8879\x2daeb4fdf0d710.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-438a2a83\x2d0bc4\x2d4204\x2d8879\x2daeb4fdf0d710.mount has successfully entered the 'dead' state. Jan 23 16:20:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-189679d9\x2d8ded\x2d4bb9\x2da914\x2d4e0809035e52.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-189679d9\x2d8ded\x2d4bb9\x2da914\x2d4e0809035e52.mount has successfully entered the 'dead' state. Jan 23 16:20:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-438a2a83\x2d0bc4\x2d4204\x2d8879\x2daeb4fdf0d710.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-438a2a83\x2d0bc4\x2d4204\x2d8879\x2daeb4fdf0d710.mount has successfully entered the 'dead' state. 
Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.083304245Z" level=info msg="runSandbox: deleting pod ID a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd from idIndex" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.083327388Z" level=info msg="runSandbox: removing pod sandbox a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.083340943Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.083353270Z" level=info msg="runSandbox: unmounting shmPath for sandbox a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.084399146Z" level=info msg="runSandbox: deleting pod ID 81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77 from idIndex" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.084429229Z" level=info msg="runSandbox: removing pod sandbox 81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.084449258Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.084463900Z" level=info msg="runSandbox: unmounting shmPath for sandbox 81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.095421977Z" level=info msg="runSandbox: removing pod sandbox from storage: 81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.099155490Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.099177215Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=41dd2673-7d03-497f-8e4c-040f09e9566c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: E0123 16:20:24.099309 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:24.099355 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:20:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:24.099377 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:20:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:24.099426 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.100453309Z" level=info msg="runSandbox: removing pod sandbox from storage: a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.103677572Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:24.103695710Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=0e4fa373-246e-4bd8-8065-711f51b1ac8c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:24.103864 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:20:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:24.103896 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:20:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:24.103916 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:20:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:24.103958 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.037914303Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.037949476Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.040373402Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.040403836Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.041000505Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.041042087Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ab6eb2eb\x2dee40\x2d4335\x2da882\x2dea110e579f10.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7b87abc3\x2d4053\x2d46ec\x2db351\x2dbf6de2808dde.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a6f46fa44724059b1455b64a617f664f0cda8c1473216788e5c2c994029ca9bd-userdata-shm.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-81eca4ef3b5f0f7605581e4548860a6ba78445357918cea5fe5f96a7065a9f77-userdata-shm.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bebd0895\x2d5cdf\x2d462c\x2d8288\x2d1c119faa3e13.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ab6eb2eb\x2dee40\x2d4335\x2da882\x2dea110e579f10.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7b87abc3\x2d4053\x2d46ec\x2db351\x2dbf6de2808dde.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bebd0895\x2d5cdf\x2d462c\x2d8288\x2d1c119faa3e13.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ab6eb2eb\x2dee40\x2d4335\x2da882\x2dea110e579f10.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7b87abc3\x2d4053\x2d46ec\x2db351\x2dbf6de2808dde.mount: Succeeded. Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.077322597Z" level=info msg="runSandbox: deleting pod ID 0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d from idIndex" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.077356971Z" level=info msg="runSandbox: removing pod sandbox 0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.077328222Z" level=info msg="runSandbox: deleting pod ID 6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26 from idIndex" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.077404919Z" level=info msg="runSandbox: removing pod sandbox 6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.077420928Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.077440613Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.077427425Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.077523468Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.085278253Z" level=info msg="runSandbox: deleting pod ID b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a from idIndex" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.085306888Z" level=info msg="runSandbox: removing pod sandbox
b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.085325075Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.085338555Z" level=info msg="runSandbox: unmounting shmPath for sandbox b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.093429441Z" level=info msg="runSandbox: removing pod sandbox from storage: 6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.093454302Z" level=info msg="runSandbox: removing pod sandbox from storage: 0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.096641046Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.096662304Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=5aaf3ad6-2a8f-4088-8d43-26a238094865 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.096941 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.096992 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.097020 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.097080 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.097474372Z" level=info msg="runSandbox: removing pod sandbox from storage: b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.099708488Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.099728207Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=11951799-e9fc-4a0d-b827-25c810b2a423 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.099861 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.099892 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.099913 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.099951 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.102803706Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:25.102824196Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=608e9089-133f-4fcd-85da-3e511ce8727f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.103018 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.103050 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.103071 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.103117 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:25.997148 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:20:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:25.997645 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.031013539Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.031054286Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e7f17118\x2da30a\x2d4506\x2dade7\x2d40a7bd61a80a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e7f17118\x2da30a\x2d4506\x2dade7\x2d40a7bd61a80a.mount has successfully entered the 'dead' state. Jan 23 16:20:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bebd0895\x2d5cdf\x2d462c\x2d8288\x2d1c119faa3e13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-bebd0895\x2d5cdf\x2d462c\x2d8288\x2d1c119faa3e13.mount has successfully entered the 'dead' state. Jan 23 16:20:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b3ac9eed8ffb084d9b1a3631a97fadcacf8e17e802433de6470d653b9243fc1a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:20:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0abb76c80951199d64f14d3650becdb090bd0a9552c2d27215617da1ce4acc6d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:20:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6d49b0fe3c3c1acec5aba093db0a315cf9b3ca781cb52f9b2c4bbf1c14957a26-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:20:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e7f17118\x2da30a\x2d4506\x2dade7\x2d40a7bd61a80a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e7f17118\x2da30a\x2d4506\x2dade7\x2d40a7bd61a80a.mount has successfully entered the 'dead' state. Jan 23 16:20:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e7f17118\x2da30a\x2d4506\x2dade7\x2d40a7bd61a80a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e7f17118\x2da30a\x2d4506\x2dade7\x2d40a7bd61a80a.mount has successfully entered the 'dead' state. Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.080309619Z" level=info msg="runSandbox: deleting pod ID 940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081 from idIndex" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.080342014Z" level=info msg="runSandbox: removing pod sandbox 940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.080358140Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.080372785Z" level=info msg="runSandbox: unmounting shmPath for sandbox 940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.094475635Z" level=info msg="runSandbox: removing pod sandbox from storage: 940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.098053990Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:26.098072924Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c5271c12-225b-4ff1-9ef2-dade558cdfbb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:26.098303 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:26.098340 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:20:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:26.098360 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:20:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:26.098395 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(940dc425587afb35ec9a236ef8851eeafe3ea88ec25f01c989f258f44fe08081): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:27.853741 8631 kubelet.go:1343] "Image garbage collection succeeded" Jan 23 16:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:27.857273 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:27.857289 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:27.857295 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:27.857300 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:27.857306 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:27.857312 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:27.857319 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:20:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:27.862745156Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=9b2d07a1-e4a3-4614-b242-82e87ad71e26 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:20:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:27.862864970Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9b2d07a1-e4a3-4614-b242-82e87ad71e26 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.036669997Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.036706866Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.037153520Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.037191618Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9048a124\x2d68ed\x2d4701\x2d8177\x2d80310566e7de.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9048a124\x2d68ed\x2d4701\x2d8177\x2d80310566e7de.mount has successfully entered the 'dead' state. Jan 23 16:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-31efc2c9\x2d8c74\x2d4e74\x2d90c8\x2dbc1baf5702b4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-31efc2c9\x2d8c74\x2d4e74\x2d90c8\x2dbc1baf5702b4.mount has successfully entered the 'dead' state. 
Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.042771036Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.042799904Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-142a6841\x2d2287\x2d4e79\x2da7ff\x2d103322c5e520.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-142a6841\x2d2287\x2d4e79\x2da7ff\x2d103322c5e520.mount has successfully entered the 'dead' state. Jan 23 16:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9048a124\x2d68ed\x2d4701\x2d8177\x2d80310566e7de.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9048a124\x2d68ed\x2d4701\x2d8177\x2d80310566e7de.mount has successfully entered the 'dead' state. Jan 23 16:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-31efc2c9\x2d8c74\x2d4e74\x2d90c8\x2dbc1baf5702b4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-31efc2c9\x2d8c74\x2d4e74\x2d90c8\x2dbc1baf5702b4.mount has successfully entered the 'dead' state. Jan 23 16:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-142a6841\x2d2287\x2d4e79\x2da7ff\x2d103322c5e520.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-142a6841\x2d2287\x2d4e79\x2da7ff\x2d103322c5e520.mount has successfully entered the 'dead' state. 
Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.088309453Z" level=info msg="runSandbox: deleting pod ID 6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f from idIndex" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.088342066Z" level=info msg="runSandbox: removing pod sandbox 6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.088356431Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.088312895Z" level=info msg="runSandbox: deleting pod ID 2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba from idIndex" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.088383429Z" level=info msg="runSandbox: removing pod sandbox 2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.088393941Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.088405387Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.088406524Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.096302030Z" level=info msg="runSandbox: deleting pod ID 7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05 from idIndex" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.096323677Z" level=info msg="runSandbox: removing pod sandbox 7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.096335618Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.096348518Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.104452476Z" level=info msg="runSandbox: removing pod sandbox from storage: 6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.104452564Z" level=info msg="runSandbox: removing pod sandbox from storage: 2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.107561104Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.107579855Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=c7f0ee7b-62d4-4d59-960e-21641a4eb891 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.107783 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.107826 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.107847 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.107891 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.110428896Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.112383139Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=63e975a4-5d69-4de0-875f-9f12ae653950 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.112565942Z" level=info msg="runSandbox: removing pod sandbox from storage: 7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.113281 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.113316 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.113338 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.113380 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.119675522Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.119699700Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=f57cb9ba-ab5d-4c4e-9925-c5d285bd752c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.119812 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.119846 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.119867 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:28.119907 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:28.142373560Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.034128741Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.034329711Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.036937364Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.036963327Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1e0d9b43\x2d7902\x2d4062\x2dab49\x2daa1fc59bc60e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1e0d9b43\x2d7902\x2d4062\x2dab49\x2daa1fc59bc60e.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d981c481\x2d0c94\x2d45a9\x2d968b\x2d8135a8b5691e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d981c481\x2d0c94\x2d45a9\x2d968b\x2d8135a8b5691e.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-142a6841\x2d2287\x2d4e79\x2da7ff\x2d103322c5e520.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-142a6841\x2d2287\x2d4e79\x2da7ff\x2d103322c5e520.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7d9c0473bf4710e42d286bda86da070cfecc3c9cfe9e039ca4d89b0ddb97da05-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9048a124\x2d68ed\x2d4701\x2d8177\x2d80310566e7de.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9048a124\x2d68ed\x2d4701\x2d8177\x2d80310566e7de.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-31efc2c9\x2d8c74\x2d4e74\x2d90c8\x2dbc1baf5702b4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-31efc2c9\x2d8c74\x2d4e74\x2d90c8\x2dbc1baf5702b4.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2186884100d96caec837a4e96ed4cde0ff1cca00667ebf6f82ae6353ad7f52ba-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6ba5c248a7e51dc9dea1dff7aba5aef902afa6fdca1cb7e8c0a090b7b2be445f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1e0d9b43\x2d7902\x2d4062\x2dab49\x2daa1fc59bc60e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1e0d9b43\x2d7902\x2d4062\x2dab49\x2daa1fc59bc60e.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d981c481\x2d0c94\x2d45a9\x2d968b\x2d8135a8b5691e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d981c481\x2d0c94\x2d45a9\x2d968b\x2d8135a8b5691e.mount has successfully entered the 'dead' state. Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1e0d9b43\x2d7902\x2d4062\x2dab49\x2daa1fc59bc60e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1e0d9b43\x2d7902\x2d4062\x2dab49\x2daa1fc59bc60e.mount has successfully entered the 'dead' state. 
Jan 23 16:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d981c481\x2d0c94\x2d45a9\x2d968b\x2d8135a8b5691e.mount: Succeeded.
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.085313650Z" level=info msg="runSandbox: deleting pod ID dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c from idIndex" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.085339534Z" level=info msg="runSandbox: removing pod sandbox dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.085315948Z" level=info msg="runSandbox: deleting pod ID f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad from idIndex" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.085374535Z" level=info msg="runSandbox: removing pod sandbox f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.085387169Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.085397958Z" level=info msg="runSandbox: unmounting shmPath for sandbox f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.085354224Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.085487994Z" level=info msg="runSandbox: unmounting shmPath for sandbox dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.097407838Z" level=info msg="runSandbox: removing pod sandbox from storage: dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.100579539Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.100597656Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=a6381971-7a5b-4582-9a3d-2a40ce475b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:29.100734 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:20:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:29.100781 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:20:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:29.100804 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:20:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:29.100856 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.105428994Z" level=info msg="runSandbox: removing pod sandbox from storage: f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.108699171Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:29.108717651Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=422a0576-a284-4b59-948a-5955ae44e0a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:29.108880 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:20:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:29.108913 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:20:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:29.108935 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:20:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:29.108974 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:20:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dc90a9df23ee46f21c6b75c3b69ad14ac4973475bfbdeff4d30abf913f43782c-userdata-shm.mount: Succeeded.
Jan 23 16:20:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f216185bb8fae8713967c2b5576b3977cb9f0e813113b41df205aa4f16dde5ad-userdata-shm.mount: Succeeded.
Jan 23 16:20:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:32.995555 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:32.995950555Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:32.995993865Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:33.006756825Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/bfef26a1-1713-4fe2-ad2f-0960e97ec335 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:33.006776264Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:35.995716 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:35.996065374Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:35.996105062Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:36.006757144Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/ee9aed43-0722-4448-a321-376e055bb7ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:36.006783566Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:36.996105 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983"
Jan 23 16:20:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:36.996683 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:20:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:37.996198 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:37.996672356Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:37.996724392Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:37.997016 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:37.997329528Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:37.997362834Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:37.997352 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:20:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:37.997412 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:37.997734954Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:37.997754363Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:37.997816087Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:37.997845924Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.007113624Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/6d36c14b-aaa4-4c3d-8a5f-bc407aea9650 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.007141328Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.028308206Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/bb3b33ab-34d3-4ecf-8dfe-293140eea457 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.028335601Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.030179430Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/672124df-bcbd-47da-ac3a-07fd6d0d2d81 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.030202674Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.030713868Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/3bc7528d-10c1-4879-942d-2aa5c6d5fc0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.030736559Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:38.513555 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-bcwzw]
Jan 23 16:20:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:38.995852 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:38.995973 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.996249745Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.996285764Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.996364034Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:38.996391055Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.009846857Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/17e5e122-9a78-43f6-83f3-e08402d8cd28 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.009867184Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.010740426Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/a59af4ad-921c-4ff4-9d15-8491e22de1dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.010762333Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:39.995928 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:39.996128 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:39.996019 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.996458106Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.996491187Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.996574506Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.996618530Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.996629913Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:39.996654953Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:40.015985355Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/a3919ab4-b03c-4096-bb4a-ed80e5565bd9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:40.016006098Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:40.017551687Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/daaee1e4-1b28-407f-ad80-1c93c5f55b80 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:40.017574526Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:40.018181672Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/6a93f17b-cf96-456d-97fb-c858db2e0437 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:40.018199714Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:42.995585 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:42.996005620Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:42.996058817Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:20:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:43.007342258Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/2857d4ff-b33b-499d-8169-ffd80157dec6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:20:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:43.007368540Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:20:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:20:51.996791 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983"
Jan 23 16:20:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:20:51.997326 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:20:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:20:58.148313005Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:21:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:03.996038 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983"
Jan 23 16:21:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:03.996576 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721607807Z" level=info msg="NetworkStart: stopping network for sandbox 3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721624990Z" level=info msg="NetworkStart: stopping network for sandbox a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721758950Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/36ae9eb8-8f88-456c-82b6-fb4d7e8511c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721786440Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721794240Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721800783Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721821926Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/a959a028-05e0-4a1b-8772-3a37fb251d3f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721846750Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721853775Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.721859909Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.722707451Z" level=info msg="NetworkStart: stopping network for sandbox f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.722814763Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/c2f8eb26-2510-4422-a20c-4853c6837ad1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.722835032Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.722841318Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.722847700Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.725339189Z" level=info msg="NetworkStart: stopping network for sandbox b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.725447531Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/a762ce1d-1011-414a-9ffd-eaeed9da0efd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.725466565Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.725473385Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.725479371Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.726128295Z" level=info msg="NetworkStart: stopping network for sandbox 19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.726283573Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/09180d93-ac2b-446e-a32e-482ee43aea56 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.726311396Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.726319288Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:06.726325797Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.839004855Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_cni-sysctl-allowlist-ds-bcwzw_openshift-multus_a316a3e8-6ad7-4240-b3a4-3752ec397ce5_0(b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69): error removing pod openshift-multus_cni-sysctl-allowlist-ds-bcwzw from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/cni-sysctl-allowlist-ds-bcwzw/a316a3e8-6ad7-4240-b3a4-3752ec397ce5]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.839041675Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a31daa7d\x2dd853\x2d4e54\x2daedf\x2da49ca6724db9.mount: Succeeded.
Jan 23 16:21:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a31daa7d\x2dd853\x2d4e54\x2daedf\x2da49ca6724db9.mount: Succeeded.
Jan 23 16:21:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a31daa7d\x2dd853\x2d4e54\x2daedf\x2da49ca6724db9.mount: Succeeded.
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.889359853Z" level=info msg="runSandbox: deleting pod ID b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69 from idIndex" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.889384694Z" level=info msg="runSandbox: removing pod sandbox b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.889405408Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.889419074Z" level=info msg="runSandbox: unmounting shmPath for sandbox b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69-userdata-shm.mount: Succeeded.
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.909429675Z" level=info msg="runSandbox: removing pod sandbox from storage: b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.912688650Z" level=info msg="runSandbox: releasing container name: k8s_POD_cni-sysctl-allowlist-ds-bcwzw_openshift-multus_a316a3e8-6ad7-4240-b3a4-3752ec397ce5_0" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:08.912708108Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_cni-sysctl-allowlist-ds-bcwzw_openshift-multus_a316a3e8-6ad7-4240-b3a4-3752ec397ce5_0" id=7b932dc5-d914-47e7-9ba0-37e37ff56d2c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:08.912919 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-bcwzw_openshift-multus_a316a3e8-6ad7-4240-b3a4-3752ec397ce5_0(b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69): error adding pod openshift-multus_cni-sysctl-allowlist-ds-bcwzw to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-bcwzw/a316a3e8-6ad7-4240-b3a4-3752ec397ce5]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:21:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:08.913088 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-bcwzw_openshift-multus_a316a3e8-6ad7-4240-b3a4-3752ec397ce5_0(b80e2f900ecbaa6e8241a9eeb4cf680044c49d58bc64992cd50f34fb1fc3ad69): error adding pod openshift-multus_cni-sysctl-allowlist-ds-bcwzw to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-bcwzw/a316a3e8-6ad7-4240-b3a4-3752ec397ce5]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/cni-sysctl-allowlist-ds-bcwzw"
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.862721 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szpgq\" (UniqueName: \"kubernetes.io/projected/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-kube-api-access-szpgq\") pod \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") "
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.862751 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-tuning-conf-dir\") pod \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") "
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.862769 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-cni-sysctl-allowlist\") pod \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") "
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.862791 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-ready\") pod \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\" (UID: \"a316a3e8-6ad7-4240-b3a4-3752ec397ce5\") "
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.862873 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "a316a3e8-6ad7-4240-b3a4-3752ec397ce5" (UID: "a316a3e8-6ad7-4240-b3a4-3752ec397ce5"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:21:09.862970 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/a316a3e8-6ad7-4240-b3a4-3752ec397ce5/volumes/kubernetes.io~empty-dir/ready: clearQuota called, but quotas disabled
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.863001 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-ready" (OuterVolumeSpecName: "ready") pod "a316a3e8-6ad7-4240-b3a4-3752ec397ce5" (UID: "a316a3e8-6ad7-4240-b3a4-3752ec397ce5"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:21:09.863012 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/a316a3e8-6ad7-4240-b3a4-3752ec397ce5/volumes/kubernetes.io~configmap/cni-sysctl-allowlist: clearQuota called, but quotas disabled
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.863129 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "a316a3e8-6ad7-4240-b3a4-3752ec397ce5" (UID: "a316a3e8-6ad7-4240-b3a4-3752ec397ce5"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:21:09 hub-master-0.workload.bos2.lab systemd[1]: var-lib-kubelet-pods-a316a3e8\x2d6ad7\x2d4240\x2db3a4\x2d3752ec397ce5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dszpgq.mount: Succeeded.
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.872709 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-kube-api-access-szpgq" (OuterVolumeSpecName: "kube-api-access-szpgq") pod "a316a3e8-6ad7-4240-b3a4-3752ec397ce5" (UID: "a316a3e8-6ad7-4240-b3a4-3752ec397ce5"). InnerVolumeSpecName "kube-api-access-szpgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.963151 8631 reconciler.go:399] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-ready\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.963170 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-szpgq\" (UniqueName: \"kubernetes.io/projected/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-kube-api-access-szpgq\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.963180 8631 reconciler.go:399] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-tuning-conf-dir\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:21:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:09.963188 8631 reconciler.go:399] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a316a3e8-6ad7-4240-b3a4-3752ec397ce5-cni-sysctl-allowlist\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:21:10 hub-master-0.workload.bos2.lab systemd[1]: Removed slice libcontainer container kubepods-besteffort-poda316a3e8_6ad7_4240_b3a4_3752ec397ce5.slice.
Jan 23 16:21:10 hub-master-0.workload.bos2.lab systemd[1]: kubepods-besteffort-poda316a3e8_6ad7_4240_b3a4_3752ec397ce5.slice: Consumed 0 CPU time
Jan 23 16:21:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:11.456730 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-bcwzw]
Jan 23 16:21:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:11.459221 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-bcwzw]
Jan 23 16:21:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:11.999187 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a316a3e8-6ad7-4240-b3a4-3752ec397ce5 path="/var/lib/kubelet/pods/a316a3e8-6ad7-4240-b3a4-3752ec397ce5/volumes"
Jan 23 16:21:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:16.996186 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983"
Jan 23 16:21:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:16.996695 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:18.019350689Z" level=info msg="NetworkStart: stopping network for sandbox a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:18.019685704Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/bfef26a1-1713-4fe2-ad2f-0960e97ec335 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:18.019710645Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:18.019718262Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:18.019724635Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:21.020065115Z" level=info msg="NetworkStart: stopping network for sandbox 11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:21.020213122Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/ee9aed43-0722-4448-a321-376e055bb7ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:21.020236689Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:21.020243659Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:21.020250722Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:21 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00094|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 deletes)
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.021391459Z" level=info msg="NetworkStart: stopping network for sandbox 24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.021547611Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/6d36c14b-aaa4-4c3d-8a5f-bc407aea9650 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.021573773Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.021580757Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.021587161Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.040887593Z" level=info msg="NetworkStart: stopping network for sandbox 658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.041000654Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/bb3b33ab-34d3-4ecf-8dfe-293140eea457 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.041021389Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.041027713Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.041033816Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.043019129Z" level=info msg="NetworkStart: stopping network for sandbox 64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.043160039Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/3bc7528d-10c1-4879-942d-2aa5c6d5fc0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.043185469Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.043193335Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.043201058Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.044113418Z" level=info msg="NetworkStart: stopping network for sandbox 31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.044229565Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/672124df-bcbd-47da-ac3a-07fd6d0d2d81 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.044254328Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.044262140Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:23.044270486Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:23 hub-master-0.workload.bos2.lab sshd[25128]: Accepted publickey for core from 2600:52:7:18::11 port 38948 ssh2: ED25519 SHA256:51RsaYMAVDXjZ4ofvNlClwmCDL0eebyMyw8HOKcupS0
Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[1]: Created slice User Slice of UID 1000.
-- Subject: Unit user-1000.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-1000.slice has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[1]: Starting User runtime directory /run/user/1000... -- Subject: Unit user-runtime-dir@1000.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@1000.service has begun starting up. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd-logind[3052]: New session 3 of user core. -- Subject: A new session 3 has been created for user core -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 3 has been created for the user core. -- -- The leading process of the session is 25128. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[1]: Started User runtime directory /run/user/1000. -- Subject: Unit user-runtime-dir@1000.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@1000.service has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[1]: Starting User Manager for UID 1000... -- Subject: Unit user@1000.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@1000.service has begun starting up. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: pam_unix(systemd-user:session): session opened for user core by (uid=0) Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: /usr/lib/systemd/user/podman-kube@.service:10: Failed to parse service restart specifier, ignoring: never Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Starting Create User's Volatile Files and Directories... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Reached target Paths. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers). -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Created slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started Podman auto-update timer. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Listening on GnuPG cryptographic agent and passphrase cache (restricted). 
-- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Listening on GnuPG cryptographic agent (ssh-agent emulation). -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started Daily Cleanup of User's Temporary Directories. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Reached target Timers. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Listening on GnuPG cryptographic agent and passphrase cache. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Listening on Podman API Socket. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Listening on GnuPG network certificate management daemon. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Starting D-Bus User Message Bus Socket. -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started Create User's Volatile Files and Directories. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Listening on D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Reached target Sockets. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Reached target Basic System. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[1]: Started User Manager for UID 1000. 
-- Subject: Unit user@1000.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@1000.service has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Starting Podman Start All Containers With Restart Policy Set To Always... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[1]: Started Session 3 of user core. -- Subject: Unit session-3.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-3.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Starting A template for running K8s workloads via podman-play-kube... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Starting Podman auto-update service... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Starting Podman API Service... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 16:21:23 hub-master-0.workload.bos2.lab sshd[25128]: pam_unix(sshd:session): session opened for user core by (uid=0) Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started Podman API Service. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25181]: time="2023-01-23T16:21:23Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25178]: time="2023-01-23T16:21:23Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25313]: time="2023-01-23T16:21:23Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25249]: Error: open default: no such file or directory Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25313]: time="2023-01-23T16:21:23Z" level=info msg="Setting parallel job count to 337" Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started D-Bus User Message Bus. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25181]: time="2023-01-23T16:21:23Z" level=info msg="Setting parallel job count to 337" Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Created slice user.slice. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started podman-pause-a19b7921.scope. 
-- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: podman-kube@default.service: Main process exited, code=exited, status=125/n/a Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: podman-kube@default.service: Failed with result 'exit-code'. -- Subject: Unit failed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit UNIT has entered the 'failed' state with result 'exit-code'. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Failed to start A template for running K8s workloads via podman-play-kube. -- Subject: Unit UNIT has failed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has failed. -- -- The result is failed. Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25343]: time="2023-01-23T16:21:23Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25343]: time="2023-01-23T16:21:23Z" level=info msg="Setting parallel job count to 337" Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25343]: time="2023-01-23T16:21:23Z" level=info msg="Using systemd socket activation to determine API endpoint" Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25343]: time="2023-01-23T16:21:23Z" level=info msg="API service listening on \"/run/user/1000/podman/podman.sock\". URI: \"/run/user/1000/podman/podman.sock\"" Jan 23 16:21:23 hub-master-0.workload.bos2.lab podman[25343]: time="2023-01-23T16:21:23Z" level=info msg="API service listening on \"/run/user/1000/podman/podman.sock\"" Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started podman-pause-0eef0e2e.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started podman-pause-f8e5da46.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started Podman Start All Containers With Restart Policy Set To Always. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Started Podman auto-update service. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Reached target Default. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:21:23 hub-master-0.workload.bos2.lab systemd[25135]: Startup finished in 510ms. -- Subject: User manager start-up is now complete -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The user manager instance for user 1000 has been started. 
All services queued -- for starting have been started. Note that other services might still be starting -- up or be started at any later time. -- -- Startup of the manager took 510955 microseconds. Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.022615227Z" level=info msg="NetworkStart: stopping network for sandbox e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.023014490Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/17e5e122-9a78-43f6-83f3-e08402d8cd28 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.023040092Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.023047450Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.023054120Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.025745873Z" level=info msg="NetworkStart: stopping network for sandbox 8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.025878204Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/a59af4ad-921c-4ff4-9d15-8491e22de1dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.025901725Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.025908038Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:24.025914387Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.029061232Z" level=info msg="NetworkStart: stopping network for sandbox 071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.029193859Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 
NetNS:/var/run/netns/6a93f17b-cf96-456d-97fb-c858db2e0437 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.029221063Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.029227705Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.029233453Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030117796Z" level=info msg="NetworkStart: stopping network for sandbox 65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030239009Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/daaee1e4-1b28-407f-ad80-1c93c5f55b80 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030262873Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030270602Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030278098Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030507304Z" level=info msg="NetworkStart: stopping network for sandbox 77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030605060Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/a3919ab4-b03c-4096-bb4a-ed80e5565bd9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030624948Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030631378Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:21:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:25.030637175Z" level=info msg="Deleting pod 
openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:27.857566 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:27.857818 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:27.857826 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:27.857833 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:27.857843 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:27.857853 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:27.857861 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:21:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:28.019760751Z" level=info msg="NetworkStart: stopping network for sandbox a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:28.020026359Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/2857d4ff-b33b-499d-8169-ffd80157dec6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:21:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:28.020053754Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:21:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:28.020060806Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:21:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:28.020067273Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:21:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:28.141615729Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:21:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:31.996726 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 
16:21:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:31.997376 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:21:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490898.1207] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:21:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490898.1212] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:21:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490898.1213] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:21:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490898.1452] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:21:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674490898.1454] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:21:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:43.996896 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:21:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:43.997747162Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=f15fdccd-e054-4568-a42d-8f6a8ba8e5f9 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:21:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:43.997933097Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f15fdccd-e054-4568-a42d-8f6a8ba8e5f9 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:21:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:43.998540483Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=587ed3bd-b81a-4a56-83e8-2f88e3ab098c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:21:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:43.998678137Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=587ed3bd-b81a-4a56-83e8-2f88e3ab098c name=/runtime.v1.ImageService/ImageStatus Jan 23 16:21:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:43.999703796Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=2b00a5fb-8e34-4103-a3ed-5e5a2a806af8 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:21:43 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:43.999938785Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:21:44 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope. -- Subject: Unit crio-conmon-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:21:44 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333. -- Subject: Unit crio-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.108845507Z" level=info msg="Created container c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=2b00a5fb-8e34-4103-a3ed-5e5a2a806af8 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.109295189Z" level=info msg="Starting container: c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" id=4bd619e5-b1b7-4cac-9054-b271f1a493b3 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.127668310Z" level=info msg="Started container" PID=25987 containerID=c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=4bd619e5-b1b7-4cac-9054-b271f1a493b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.131816940Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.142570709Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.142590899Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.142604264Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.152217639Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.152239864Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.152253593Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 
16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.160738971Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.160753950Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.160762732Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.169271940Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.169294611Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:21:44 hub-master-0.workload.bos2.lab conmon[25969]: conmon c865eeedc39931f72911 : container 25987 exited with status 1 Jan 23 16:21:44 hub-master-0.workload.bos2.lab systemd[1]: crio-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope has successfully entered the 'dead' state. Jan 23 16:21:44 hub-master-0.workload.bos2.lab systemd[1]: crio-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope: Consumed 570ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope completed and consumed the indicated resources. Jan 23 16:21:44 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope has successfully entered the 'dead' state. Jan 23 16:21:44 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope: Consumed 48ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333.scope completed and consumed the indicated resources. 
Jan 23 16:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:44.832120 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/180.log" Jan 23 16:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:44.832682 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/179.log" Jan 23 16:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:44.833434 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" exitCode=1 Jan 23 16:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:44.833457 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333} Jan 23 16:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:44.833477 8631 scope.go:115] "RemoveContainer" containerID="8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.834340467Z" level=info msg="Removing container: 8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983" id=9723885a-078e-4297-8a8e-50a4362dfa40 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:44.834364 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:44.834887 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:21:44 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-2cd9a3e5eb07a1122e9313507b9e5992031537c4a41a432879bbc1d1268d55af-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-2cd9a3e5eb07a1122e9313507b9e5992031537c4a41a432879bbc1d1268d55af-merged.mount has successfully entered the 'dead' state. 
Jan 23 16:21:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:44.881336458Z" level=info msg="Removed container 8db07ceee8f9d189ad62aecbfca640b5a1f8feb4fe6cd1ae63d8e5d99fecf983: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=9723885a-078e-4297-8a8e-50a4362dfa40 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:21:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:45.668138 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:21:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:45.837609 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/180.log" Jan 23 16:21:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:45.839240 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:21:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:45.839715 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.733767463Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.733969169Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.734031868Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:21:51.734066959Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.734037145Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.734140784Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.736092042Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.736119812Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.737154462Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:21:51.737196396Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c2f8eb26\x2d2510\x2d4422\x2da20c\x2d4853c6837ad1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c2f8eb26\x2d2510\x2d4422\x2da20c\x2d4853c6837ad1.mount has successfully entered the 'dead' state. Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a959a028\x2d05e0\x2d4a1b\x2d8772\x2d3a37fb251d3f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a959a028\x2d05e0\x2d4a1b\x2d8772\x2d3a37fb251d3f.mount has successfully entered the 'dead' state. Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-36ae9eb8\x2d8f88\x2d456c\x2d82b6\x2dfb4d7e8511c9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-36ae9eb8\x2d8f88\x2d456c\x2d82b6\x2dfb4d7e8511c9.mount has successfully entered the 'dead' state. Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a762ce1d\x2d1011\x2d414a\x2d9ffd\x2deaeed9da0efd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a762ce1d\x2d1011\x2d414a\x2d9ffd\x2deaeed9da0efd.mount has successfully entered the 'dead' state. Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-09180d93\x2dac2b\x2d446e\x2da32e\x2d482ee43aea56.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-09180d93\x2dac2b\x2d446e\x2da32e\x2d482ee43aea56.mount has successfully entered the 'dead' state. Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a762ce1d\x2d1011\x2d414a\x2d9ffd\x2deaeed9da0efd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a762ce1d\x2d1011\x2d414a\x2d9ffd\x2deaeed9da0efd.mount has successfully entered the 'dead' state. Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a959a028\x2d05e0\x2d4a1b\x2d8772\x2d3a37fb251d3f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a959a028\x2d05e0\x2d4a1b\x2d8772\x2d3a37fb251d3f.mount has successfully entered the 'dead' state. Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c2f8eb26\x2d2510\x2d4422\x2da20c\x2d4853c6837ad1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c2f8eb26\x2d2510\x2d4422\x2da20c\x2d4853c6837ad1.mount has successfully entered the 'dead' state. Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-36ae9eb8\x2d8f88\x2d456c\x2d82b6\x2dfb4d7e8511c9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-36ae9eb8\x2d8f88\x2d456c\x2d82b6\x2dfb4d7e8511c9.mount has successfully entered the 'dead' state. 
Jan 23 16:21:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-09180d93\x2dac2b\x2d446e\x2da32e\x2d482ee43aea56.mount: Succeeded.
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.782448881Z" level=info msg="runSandbox: deleting pod ID b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3 from idIndex" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.782475227Z" level=info msg="runSandbox: removing pod sandbox b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.782497118Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.782510658Z" level=info msg="runSandbox: unmounting shmPath for sandbox b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.783312570Z" level=info msg="runSandbox: deleting pod ID 3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe from idIndex" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.783337604Z" level=info msg="runSandbox: removing pod sandbox 3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.783350271Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.783362168Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.784318630Z" level=info msg="runSandbox: deleting pod ID a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016 from idIndex" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.784343425Z" level=info msg="runSandbox: removing pod sandbox a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.784357336Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.784370757Z" level=info msg="runSandbox: unmounting shmPath for sandbox a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.787308019Z" level=info msg="runSandbox: deleting pod ID 19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912 from idIndex" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.787339595Z" level=info msg="runSandbox: removing pod sandbox 19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.787354656Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.787370588Z" level=info msg="runSandbox: unmounting shmPath for sandbox 19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.788292461Z" level=info msg="runSandbox: deleting pod ID f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553 from idIndex" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.788321086Z" level=info msg="runSandbox: removing pod sandbox f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.788334119Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.788345416Z" level=info msg="runSandbox: unmounting shmPath for sandbox f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.795472300Z" level=info msg="runSandbox: removing pod sandbox from storage: b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.798973567Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.798991047Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=2028fca2-ca5d-494f-9489-3c49022b89da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.799288 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.799418 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.799441 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.799486 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.803414922Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.804432013Z" level=info msg="runSandbox: removing pod sandbox from storage: a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.806778082Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.806797344Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ec3205da-ae64-411f-a5b1-c4507d69d973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.807006 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.807049 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.807071 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.807118 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.810403523Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.810425686Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=25e2f272-e1bc-4110-b515-c97b889956ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.810655 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.810688 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.810710 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.810746 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.811478847Z" level=info msg="runSandbox: removing pod sandbox from storage: 19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.812491953Z" level=info msg="runSandbox: removing pod sandbox from storage: f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.814736051Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.814755678Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=2ac60aa4-94c0-4edc-bd2a-59a5ad3c948b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.815037 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.815074 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.815097 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.815138 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.817903255Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.817923266Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=24458c5a-0a98-4bde-8bd5-3e7b073f29e5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.818026 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.818059 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.818081 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:51.818119 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:51.850082 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:51.850173 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:51.850261 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850444366Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850477191Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:51.850479 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:51.850516 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850570468Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850607325Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850663144Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850693585Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850758800Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850788393Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850833726Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.850857994Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.877201451Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ba4f0dd6-22ee-4c41-a7eb-8bbeae14dcf6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.877231455Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.877523655Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5bbca284-2a82-4cae-b101-d9ccad290f8d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.877542508Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.882649265Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/4f9e27a6-713f-4cf1-977b-192eb673e87b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.882675395Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.883021572Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/30b2f192-9ddb-4ffd-942a-0af2e1ba2c0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.883042844Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.886565511Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/81dd69c4-7e77-4a93-b2bc-8983d9720e37 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:21:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:51.886590446Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a762ce1d\x2d1011\x2d414a\x2d9ffd\x2deaeed9da0efd.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-09180d93\x2dac2b\x2d446e\x2da32e\x2d482ee43aea56.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c2f8eb26\x2d2510\x2d4422\x2da20c\x2d4853c6837ad1.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a959a028\x2d05e0\x2d4a1b\x2d8772\x2d3a37fb251d3f.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b9ee168d77406995c13e24641e4398ba8e08112ca127838dd3a1665191e831d3-userdata-shm.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-36ae9eb8\x2d8f88\x2d456c\x2d82b6\x2dfb4d7e8511c9.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-19a13702a36507b30c49ac79fa3b6c1264b676baba03801424bd676d15090912-userdata-shm.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f65b66702d9a7ed703b181ed13c854a158d09c1d08b6d783cdbf9060a66f2553-userdata-shm.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a53324ba50948bcedce1047a73ee38998b35584073b44b8099d5ddba8a0dc016-userdata-shm.mount: Succeeded.
Jan 23 16:21:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3ec75a37beaec9822aed9969b236de824a873c22db80359bacc731d21a86d9fe-userdata-shm.mount: Succeeded.
Jan 23 16:21:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:21:57.996793 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333"
Jan 23 16:21:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:21:57.997296 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:21:58.142385165Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.031645919Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.031688003Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:22:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bfef26a1\x2d1713\x2d4fe2\x2dad2f\x2d0960e97ec335.mount: Succeeded.
Jan 23 16:22:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bfef26a1\x2d1713\x2d4fe2\x2dad2f\x2d0960e97ec335.mount: Succeeded.
Jan 23 16:22:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bfef26a1\x2d1713\x2d4fe2\x2dad2f\x2d0960e97ec335.mount: Succeeded.
Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.072343698Z" level=info msg="runSandbox: deleting pod ID a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d from idIndex" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.072371453Z" level=info msg="runSandbox: removing pod sandbox a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.072392800Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.072410448Z" level=info msg="runSandbox: unmounting shmPath for sandbox a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.084446373Z" level=info msg="runSandbox: removing pod sandbox from storage: a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.087374864Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:03.087393681Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=f90a13fe-f995-4bca-8964-8d1756560b80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:03.087537 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:03.087706 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:03.087729 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:03.087777 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a732cd961109824bb58b56e07629c479203cfd3f4508397e999ada4b83d37f0d): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.031397211Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.031435631Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ee9aed43\x2d0722\x2d4448\x2da321\x2d376e055bb7ba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ee9aed43\x2d0722\x2d4448\x2da321\x2d376e055bb7ba.mount has successfully entered the 'dead' state. Jan 23 16:22:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ee9aed43\x2d0722\x2d4448\x2da321\x2d376e055bb7ba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ee9aed43\x2d0722\x2d4448\x2da321\x2d376e055bb7ba.mount has successfully entered the 'dead' state. Jan 23 16:22:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ee9aed43\x2d0722\x2d4448\x2da321\x2d376e055bb7ba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ee9aed43\x2d0722\x2d4448\x2da321\x2d376e055bb7ba.mount has successfully entered the 'dead' state. 
Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.072311094Z" level=info msg="runSandbox: deleting pod ID 11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1 from idIndex" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.072336856Z" level=info msg="runSandbox: removing pod sandbox 11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.072352282Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.072364050Z" level=info msg="runSandbox: unmounting shmPath for sandbox 11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.084483981Z" level=info msg="runSandbox: removing pod sandbox from storage: 11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.087644294Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:06.087662347Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=92c8945c-36d8-4a6f-8540-2abe8228211f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:06.087856 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:06.087896 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:22:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:06.087918 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:22:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:06.087961 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(11c000ef894c3d0858746f02c066ed5146acbf19425f97f1383fcba6624e75f1): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.033159236Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.033219577Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6d36c14b\x2daaa4\x2d4c3d\x2d8a5f\x2dbc407aea9650.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6d36c14b\x2daaa4\x2d4c3d\x2d8a5f\x2dbc407aea9650.mount has successfully entered the 'dead' state. Jan 23 16:22:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6d36c14b\x2daaa4\x2d4c3d\x2d8a5f\x2dbc407aea9650.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6d36c14b\x2daaa4\x2d4c3d\x2d8a5f\x2dbc407aea9650.mount has successfully entered the 'dead' state. 
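Every sandbox failure in this stretch bottoms out in the same condition: Multus is configured with a readiness indicator file and holds CNI ADD (and, per the "(on del)" entries, DEL) calls until the default network, OVN-Kubernetes, writes its config to /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. The trailing "timed out waiting for the condition" is the stock error string of the PollImmediate helper in k8s.io/apimachinery/pkg/util/wait. A minimal Go sketch of that gating pattern follows; this is not Multus's actual source, and the 250 ms interval and 10 s timeout are illustrative assumptions:

	package main

	import (
		"fmt"
		"os"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitForReadinessIndicator blocks until the default network's CNI config
	// exists on disk, the way a meta-plugin can gate its ADD/DEL handlers.
	func waitForReadinessIndicator(path string, timeout time.Duration) error {
		return wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
			if _, err := os.Stat(path); err == nil {
				return true, nil // indicator file present: default network is ready
			}
			return false, nil // not there yet; keep polling until the timeout
		})
	}

	func main() {
		const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
		if err := waitForReadinessIndicator(indicator, 10*time.Second); err != nil {
			// err is wait.ErrWaitTimeout, whose message is exactly the
			// "timed out waiting for the condition" seen in the journal above.
			fmt.Println("pollimmediate error:", err)
		}
	}

Until that one file appears, every pod needing a network sandbox on this node keeps failing and retrying, which is why the identical error repeats below for pod after pod.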
Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.050440467Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.050474569Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.053518482Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.053556744Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bb3b33ab\x2d34d3\x2d4ecf\x2d8dfe\x2d293140eea457.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bb3b33ab\x2d34d3\x2d4ecf\x2d8dfe\x2d293140eea457.mount has successfully entered the 'dead' state. 
Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.055158071Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.055229175Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3bc7528d\x2d10c1\x2d4879\x2d942d\x2d2aa5c6d5fc0b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3bc7528d\x2d10c1\x2d4879\x2d942d\x2d2aa5c6d5fc0b.mount has successfully entered the 'dead' state. Jan 23 16:22:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-672124df\x2dbcbd\x2d47da\x2dac3a\x2d07fd6d0d2d81.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-672124df\x2dbcbd\x2d47da\x2dac3a\x2d07fd6d0d2d81.mount has successfully entered the 'dead' state. 
Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.082314494Z" level=info msg="runSandbox: deleting pod ID 24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae from idIndex" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.082341154Z" level=info msg="runSandbox: removing pod sandbox 24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.082356177Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.082370666Z" level=info msg="runSandbox: unmounting shmPath for sandbox 24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.098506864Z" level=info msg="runSandbox: removing pod sandbox from storage: 24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.101802371Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.101821079Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=c93c52eb-cc30-4073-a73c-a369658befb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.102020 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.102062 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.102084 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.102131 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.102324863Z" level=info msg="runSandbox: deleting pod ID 64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b from idIndex" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.102351935Z" level=info msg="runSandbox: removing pod sandbox 64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.102368195Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.102384069Z" level=info msg="runSandbox: unmounting shmPath for sandbox 64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.114316581Z" level=info msg="runSandbox: deleting pod ID 31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2 from idIndex" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.114344622Z" level=info msg="runSandbox: removing pod sandbox 31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.114358184Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.114369926Z" level=info msg="runSandbox: unmounting shmPath for sandbox 31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.114386782Z" level=info msg="runSandbox: deleting pod ID 658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1 from idIndex" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.114414342Z" level=info msg="runSandbox: removing pod sandbox 658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.114429411Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1" id=f8229e68-d3db-4944-b889-f2c075916f3e 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.114442628Z" level=info msg="runSandbox: unmounting shmPath for sandbox 658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.118446000Z" level=info msg="runSandbox: removing pod sandbox from storage: 64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.122016450Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.122037348Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=620a3856-82db-45a1-ace4-bee93070c67a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.122243 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.122282 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.122306 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.122349 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.126436942Z" level=info msg="runSandbox: removing pod sandbox from storage: 658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.129614879Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.129632836Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=f8229e68-d3db-4944-b889-f2c075916f3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.129795 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.129827 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.129847 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.129884 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.130424945Z" level=info msg="runSandbox: removing pod sandbox from storage: 31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.133402800Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:08.133419449Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=8ab0bb0b-468f-44a3-9697-1a72562a22ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.133585 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.133618 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.133646 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:22:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:08.133686 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.034279850Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.034317531Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.037048397Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.037078938Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3bc7528d\x2d10c1\x2d4879\x2d942d\x2d2aa5c6d5fc0b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3bc7528d\x2d10c1\x2d4879\x2d942d\x2d2aa5c6d5fc0b.mount has successfully entered the 'dead' state. Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3bc7528d\x2d10c1\x2d4879\x2d942d\x2d2aa5c6d5fc0b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3bc7528d\x2d10c1\x2d4879\x2d942d\x2d2aa5c6d5fc0b.mount has successfully entered the 'dead' state. Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-672124df\x2dbcbd\x2d47da\x2dac3a\x2d07fd6d0d2d81.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-672124df\x2dbcbd\x2d47da\x2dac3a\x2d07fd6d0d2d81.mount has successfully entered the 'dead' state. 
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-672124df\x2dbcbd\x2d47da\x2dac3a\x2d07fd6d0d2d81.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bb3b33ab\x2d34d3\x2d4ecf\x2d8dfe\x2d293140eea457.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bb3b33ab\x2d34d3\x2d4ecf\x2d8dfe\x2d293140eea457.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-31e292e5aadab9d1db683a3f9605dc5e632e9138702efc3bde0b798e4e6323b2-userdata-shm.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-64e8b211bafdbfa6fe9196b8cda3ff931aac234e090d52c6d6baeab45078c91b-userdata-shm.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-658af2bde03d184cd5fb3abc22ca8925c23ec4c66b987586ff350b0aad0b1eb1-userdata-shm.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6d36c14b\x2daaa4\x2d4c3d\x2d8a5f\x2dbc407aea9650.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-24ced5b6f6a0d40cb86297cfb79e9c3160d0ab416db2bcb4885a33490d99f0ae-userdata-shm.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a59af4ad\x2d921c\x2d4ff4\x2d9d15\x2d8491e22de1dc.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-17e5e122\x2d9a78\x2d43f6\x2d83f3\x2de08402d8cd28.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-17e5e122\x2d9a78\x2d43f6\x2d83f3\x2de08402d8cd28.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a59af4ad\x2d921c\x2d4ff4\x2d9d15\x2d8491e22de1dc.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a59af4ad\x2d921c\x2d4ff4\x2d9d15\x2d8491e22de1dc.mount: Succeeded.
Jan 23 16:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-17e5e122\x2d9a78\x2d43f6\x2d83f3\x2de08402d8cd28.mount: Succeeded.
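The \x2d runs in the mount unit names above are systemd's unit-name escaping: a literal "-" inside a path component is encoded as \x2d, while bare "-" separates components. A tiny decoder (hypothetical helper, not a systemd tool) makes the namespace and sandbox IDs readable:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// unescapeUnit undoes systemd's \xNN escaping in unit names.
	func unescapeUnit(s string) string {
		var b strings.Builder
		for i := 0; i < len(s); {
			if i+4 <= len(s) && s[i] == '\\' && s[i+1] == 'x' {
				if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
					b.WriteByte(byte(n))
					i += 4
					continue
				}
			}
			b.WriteByte(s[i])
			i++
		}
		return b.String()
	}

	func main() {
		fmt.Println(unescapeUnit(`run-netns-6d36c14b\x2daaa4\x2d4c3d\x2d8a5f\x2dbc407aea9650.mount`))
		// -> run-netns-6d36c14b-aaa4-4c3d-8a5f-bc407aea9650.mount
	}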
Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.077332649Z" level=info msg="runSandbox: deleting pod ID 8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d from idIndex" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.077361363Z" level=info msg="runSandbox: removing pod sandbox 8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.077334161Z" level=info msg="runSandbox: deleting pod ID e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba from idIndex" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.077404037Z" level=info msg="runSandbox: removing pod sandbox e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.077421742Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.077436454Z" level=info msg="runSandbox: unmounting shmPath for sandbox e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.077375705Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.077495588Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.089456002Z" level=info msg="runSandbox: removing pod sandbox from storage: 8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.092725097Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.092742867Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0a9f1a0b-a6df-4003-9b17-2b13ca80598d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:09.093000 8631 remote_runtime.go:222] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:09.093043 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:09.093065 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:09.093112 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.097449813Z" level=info msg="runSandbox: removing pod sandbox from storage: e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.100660616Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:09.100678914Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=5f904d0b-ae63-464a-a8ea-fc45ed302402 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:09.100850 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:09.100883 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:09.100905 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:09.100940 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:22:10 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8f5ef752f59a3de6f0765820b6da9722f97bba9cd6990b89101bce4cf1ccd40d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:22:10 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e52d76558bb1a34b28eea07f62cb9ee49a6901eafc65597b333d53ffb8b327ba-userdata-shm.mount has successfully entered the 'dead' state. 
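By this point the same CreatePodSandboxError has hit revision-pruner-8/9, installer-10, etcd-guard, kube-controller-manager-guard, dns-default, and ingress-canary. When triaging a dump like this, it helps to tally which pods are stuck; a throwaway helper written for this journal's line format (hypothetical, not part of any OpenShift tooling) can be fed the saved log on stdin:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// Count kubelet "Error syncing pod, skipping" entries per pod in a saved
	// journal dump, so the worst-affected namespaces stand out.
	func main() {
		podRe := regexp.MustCompile(`pod="([^"]+)"`)
		counts := map[string]int{}
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // these journal lines are very long
		for sc.Scan() {
			line := sc.Text()
			if !strings.Contains(line, "Error syncing pod, skipping") {
				continue
			}
			if m := podRe.FindStringSubmatch(line); m != nil {
				counts[m[1]]++
			}
		}
		for pod, n := range counts {
			fmt.Printf("%5d %s\n", n, pod)
		}
	}

A high, evenly distributed count across unrelated namespaces, as here, points at a node-level network problem (the missing OVN-Kubernetes readiness file) rather than at any individual pod.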
Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.040281225Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.040322513Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.040284673Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.040377243Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.040787895Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.040814777Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 
hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6a93f17b\x2dcf96\x2d456d\x2d97fb\x2dc858db2e0437.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6a93f17b\x2dcf96\x2d456d\x2d97fb\x2dc858db2e0437.mount has successfully entered the 'dead' state. Jan 23 16:22:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-daaee1e4\x2d1b28\x2d407f\x2dad80\x2d1c93c5f55b80.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-daaee1e4\x2d1b28\x2d407f\x2dad80\x2d1c93c5f55b80.mount has successfully entered the 'dead' state. Jan 23 16:22:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a3919ab4\x2db03c\x2d4096\x2dbb4a\x2ded80e5565bd9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a3919ab4\x2db03c\x2d4096\x2dbb4a\x2ded80e5565bd9.mount has successfully entered the 'dead' state. Jan 23 16:22:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6a93f17b\x2dcf96\x2d456d\x2d97fb\x2dc858db2e0437.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6a93f17b\x2dcf96\x2d456d\x2d97fb\x2dc858db2e0437.mount has successfully entered the 'dead' state. Jan 23 16:22:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-daaee1e4\x2d1b28\x2d407f\x2dad80\x2d1c93c5f55b80.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-daaee1e4\x2d1b28\x2d407f\x2dad80\x2d1c93c5f55b80.mount has successfully entered the 'dead' state. Jan 23 16:22:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a3919ab4\x2db03c\x2d4096\x2dbb4a\x2ded80e5565bd9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a3919ab4\x2db03c\x2d4096\x2dbb4a\x2ded80e5565bd9.mount has successfully entered the 'dead' state. 
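[Annotation] The bursts of run-utsns-*/run-ipcns-*/run-netns-* .mount units "entering the 'dead' state" are the visible side of CRI-O's "runSandbox: cleaning up namespaces" step: each failed sandbox leaves per-namespace bind mounts under /run, and tearing them down makes systemd log the transient mount units as succeeded. A hedged sketch of that teardown, with illustrative names (not CRI-O's actual code; requires root, and systemd escapes the '-' in unit names as \x2d):

package main

import (
	"fmt"
	"path/filepath"

	"golang.org/x/sys/unix"
)

// cleanupSandboxNamespaces lazily detaches the per-sandbox namespace
// bind mounts under /run; each successful unmount is what systemd
// then reports as a run-utsns/run-ipcns/run-netns .mount unit
// entering the 'dead' state.
func cleanupSandboxNamespaces(nsID string) {
	for _, kind := range []string{"utsns", "ipcns", "netns"} {
		target := filepath.Join("/run", kind, nsID)
		if err := unix.Unmount(target, unix.MNT_DETACH); err != nil {
			fmt.Printf("unmount %s: %v\n", target, err)
		}
	}
}

func main() {
	// Namespace ID taken from the mount units logged above.
	cleanupSandboxNamespaces("6a93f17b-cf96-456d-97fb-c858db2e0437")
}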
Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076321631Z" level=info msg="runSandbox: deleting pod ID 65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5 from idIndex" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076349563Z" level=info msg="runSandbox: removing pod sandbox 65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076364497Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076377748Z" level=info msg="runSandbox: unmounting shmPath for sandbox 65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076322675Z" level=info msg="runSandbox: deleting pod ID 071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759 from idIndex" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076444381Z" level=info msg="runSandbox: removing pod sandbox 071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076458298Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076469947Z" level=info msg="runSandbox: unmounting shmPath for sandbox 071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076326521Z" level=info msg="runSandbox: deleting pod ID 77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b from idIndex" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076515727Z" level=info msg="runSandbox: removing pod sandbox 77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076529772Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.076543694Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.088469991Z" level=info msg="runSandbox: removing pod sandbox from storage: 77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.088494661Z" level=info msg="runSandbox: removing pod sandbox from storage: 071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.088472306Z" level=info msg="runSandbox: removing pod sandbox from storage: 65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.091663136Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.091870227Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=0f86274d-3558-4c41-b791-5b6b0a655f92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.092056 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.092100 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.092123 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.092173 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.094944504Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.094966705Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=bc6a93e0-7615-4e3d-aa93-36bf5e6e79c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.095141 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.095169 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.095188 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.095231 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.097910662Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:10.097928287Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=2340c748-533d-4d89-9df2-6e34b7618cd1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.098153 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.098199 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.098227 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:22:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:10.098265 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:22:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6a93f17b\x2dcf96\x2d456d\x2d97fb\x2dc858db2e0437.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6a93f17b\x2dcf96\x2d456d\x2d97fb\x2dc858db2e0437.mount has successfully entered the 'dead' state. Jan 23 16:22:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-daaee1e4\x2d1b28\x2d407f\x2dad80\x2d1c93c5f55b80.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-daaee1e4\x2d1b28\x2d407f\x2dad80\x2d1c93c5f55b80.mount has successfully entered the 'dead' state. Jan 23 16:22:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a3919ab4\x2db03c\x2d4096\x2dbb4a\x2ded80e5565bd9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a3919ab4\x2db03c\x2d4096\x2dbb4a\x2ded80e5565bd9.mount has successfully entered the 'dead' state. Jan 23 16:22:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-65ea82e8602a2a3c7aa92e2e3934c443e41ff9b4b634f121fe7ecd96e8e53bf5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:22:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-071a61e8c6c82eb0dbfa690db8ded570594b28686518d02eed60e6e918ae1759-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:22:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-77ea0cd45909bd571b1f66ce9c74f3291fc149eefd0264170cf9d9a1c921a59b-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:22:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:12.996599 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:22:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:12.997122 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.031196158Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.031250962Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2857d4ff\x2db33b\x2d499d\x2d8169\x2dffd80157dec6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2857d4ff\x2db33b\x2d499d\x2d8169\x2dffd80157dec6.mount has successfully entered the 'dead' state. Jan 23 16:22:13 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2857d4ff\x2db33b\x2d499d\x2d8169\x2dffd80157dec6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2857d4ff\x2db33b\x2d499d\x2d8169\x2dffd80157dec6.mount has successfully entered the 'dead' state. Jan 23 16:22:13 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2857d4ff\x2db33b\x2d499d\x2d8169\x2dffd80157dec6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2857d4ff\x2db33b\x2d499d\x2d8169\x2dffd80157dec6.mount has successfully entered the 'dead' state. 
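[Annotation] The recurring "back-off 5m0s restarting failed container=ovnkube-node" lines mean the kubelet's crash-loop back-off for this container has already reached its cap, so every sync attempt is skipped until the 5-minute window elapses. A small sketch of that doubling schedule; the 10s initial delay is an assumption for illustration, while the 5m cap matches the messages here:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; ; attempt++ {
		// Prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s, after
		// which every further restart waits the capped 5m0s —
		// the figure quoted in the CrashLoopBackOff errors.
		fmt.Printf("restart %d: back-off %v\n", attempt, delay)
		if delay >= maxDelay {
			break
		}
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

This is why the RemoveContainer/"Error syncing pod, skipping" pair for ovnkube-node-897lw repeats roughly every ten seconds below without an actual restart: the pod worker re-evaluates the pod, sees the back-off window is still open, and skips.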
Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.064308226Z" level=info msg="runSandbox: deleting pod ID a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909 from idIndex" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.064336280Z" level=info msg="runSandbox: removing pod sandbox a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.064352929Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.064376364Z" level=info msg="runSandbox: unmounting shmPath for sandbox a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.076424786Z" level=info msg="runSandbox: removing pod sandbox from storage: a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.079719295Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.079738703Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=260a75ca-5a84-40ae-b9ac-f9202947e500 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:13.079993 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:22:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:13.080033 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:13.080059 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:13.080109 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a700981e1ed45808effe3188d2197fe8ff13b46af1468da8208aca12276c8909): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:22:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:13.995820 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.996141020Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:13.996185951Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:14.010731752Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/7050bffb-364f-4c3c-9e7d-83b49f22113c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:14.010760579Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:18.996066 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:22:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:18.996370738Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:18.996409856Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:19.007448429Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/994cc594-ea54-4b78-b7bc-47bdb149499a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:19.007471044Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:20.996289 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:20.996410 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:22:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:20.996614 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:22:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:20.996730 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:20.996724378Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:20.996781355Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:20.996751498Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:20.996842383Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:20.996870704Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:20.996852785Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:20.996912145Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:20.997026238Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.019523320Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/318cb451-3908-4904-8aa0-4498ba5b8a06 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.019543074Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.020969062Z" level=info msg="Got pod network 
&{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/a47c4098-d4ad-44eb-9224-594872a1e486 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.020989955Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.021768155Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bf613e61-1b1a-4243-b4f6-bb75ea94f447 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.021788432Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.022253242Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/f826a946-4671-487d-a976-34a5373f6164 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.022270892Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:21.995804 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:22:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:21.995949 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:21.996090 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.996438859Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.996491815Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.996553063Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.996587357Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.996653717Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:21.996695831Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:22.018825190Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/9c9f9171-c84f-41b6-9c9b-a8874a09b8c3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:22.018857777Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:22.021694426Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/50eb3ac2-c442-4463-ab3c-d3a4e6c44f57 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:22.021717602Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:22.022501500Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b3af4355-192e-4049-a22c-cadb8d3d7347 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:22.022520976Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:23.996312 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:23.996445 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:23.996762 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:23.996700623Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:23.996754390Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:23.996773568Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:23.996813848Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:23.997109065Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:23.997147473Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:23.997255 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:23.997756 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:22:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:24.016524795Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/4e4db943-11a2-4a37-bdcf-41c1be07aac4 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:24.016547451Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:24.016677114Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/e6b9de60-ce2f-41c3-9776-84d35a7a1a0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:24.016697452Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:24.018935382Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d6cc9866-e258-4e44-8738-1903e3b8aecc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:24.018957087Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:27.858240 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:27.858262 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:27.858268 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:27.858274 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:27.858280 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:27.858288 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:27.858293 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:22:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:28.141612306Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:22:36 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.890715484Z" level=info msg="NetworkStart: stopping network for sandbox 1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.890882406Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5bbca284-2a82-4cae-b101-d9ccad290f8d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.890904327Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.890911093Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.890920341Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.890960006Z" level=info msg="NetworkStart: stopping network for sandbox 54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.891140154Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ba4f0dd6-22ee-4c41-a7eb-8bbeae14dcf6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.891173155Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.891181152Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.891188846Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896041161Z" level=info msg="NetworkStart: stopping network for sandbox ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896180483Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/4f9e27a6-713f-4cf1-977b-192eb673e87b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896211113Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896219955Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896227746Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896225577Z" level=info msg="NetworkStart: stopping network for sandbox 699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896389579Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/30b2f192-9ddb-4ffd-942a-0af2e1ba2c0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896416132Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896423562Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.896429481Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.899829591Z" level=info msg="NetworkStart: stopping network for sandbox 3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.899954495Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/81dd69c4-7e77-4a93-b2bc-8983d9720e37 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.899977554Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.899985417Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:36.899991542Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:22:38 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:22:38.997064 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:22:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:39.001455 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:22:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:22:49.997132 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:22:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:22:49.997676 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:22:56 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00095|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (5 adds, 5 deletes) Jan 23 16:22:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:58.143302463Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:22:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:59.023728291Z" level=info msg="NetworkStart: stopping network for sandbox a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:22:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:59.023876155Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/7050bffb-364f-4c3c-9e7d-83b49f22113c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:22:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:59.023901360Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:22:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:59.023909731Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:22:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:22:59.023916781Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:01.996617 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:23:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:01.997158 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node 
pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:23:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:04.021912275Z" level=info msg="NetworkStart: stopping network for sandbox c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:04.022057074Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/994cc594-ea54-4b78-b7bc-47bdb149499a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:04.022079488Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:04.022086227Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:04.022092634Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.032196991Z" level=info msg="NetworkStart: stopping network for sandbox 1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.032405032Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/318cb451-3908-4904-8aa0-4498ba5b8a06 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.032427603Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.032434216Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.032440156Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.032970251Z" level=info msg="NetworkStart: stopping network for sandbox f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.033093778Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab 
Namespace:openshift-kube-apiserver ID:f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/a47c4098-d4ad-44eb-9224-594872a1e486 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.033116467Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.033124942Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.033131819Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.035535407Z" level=info msg="NetworkStart: stopping network for sandbox 65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.035644834Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/f826a946-4671-487d-a976-34a5373f6164 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.035668506Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.035674981Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.035681284Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.035953131Z" level=info msg="NetworkStart: stopping network for sandbox 59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.036067228Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bf613e61-1b1a-4243-b4f6-bb75ea94f447 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.036089652Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:06.036096449Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:06 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:23:06.036102307Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.032757786Z" level=info msg="NetworkStart: stopping network for sandbox 61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.032888007Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/9c9f9171-c84f-41b6-9c9b-a8874a09b8c3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.032910262Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.032916682Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.032922526Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.033252917Z" level=info msg="NetworkStart: stopping network for sandbox 9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.033362189Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/50eb3ac2-c442-4463-ab3c-d3a4e6c44f57 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.033383639Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.033390976Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.033397135Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.035598687Z" level=info msg="NetworkStart: stopping network for sandbox 3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.035701823Z" level=info msg="Got pod 
network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b3af4355-192e-4049-a22c-cadb8d3d7347 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.035720018Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.035726401Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:07.035733506Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.031365574Z" level=info msg="NetworkStart: stopping network for sandbox f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.031517541Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/e6b9de60-ce2f-41c3-9776-84d35a7a1a0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.031543510Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.031551592Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.031557898Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.031897420Z" level=info msg="NetworkStart: stopping network for sandbox d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032025625Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d6cc9866-e258-4e44-8738-1903e3b8aecc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032047876Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032054352Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:09 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032060186Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032150756Z" level=info msg="NetworkStart: stopping network for sandbox 7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032311583Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/4e4db943-11a2-4a37-bdcf-41c1be07aac4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032342172Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032352576Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:23:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:09.032359387Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:12.997090 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:23:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:12.997769 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.902422336Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.902463337Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:23:21.903021862Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.903067404Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.907001059Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.907035014Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.907238212Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.907272000Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab systemd[1]: 
run-utsns-5bbca284\x2d2a82\x2d4cae\x2db101\x2dd9ccad290f8d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5bbca284\x2d2a82\x2d4cae\x2db101\x2dd9ccad290f8d.mount has successfully entered the 'dead' state. Jan 23 16:23:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ba4f0dd6\x2d22ee\x2d4c41\x2da7eb\x2d8bbeae14dcf6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ba4f0dd6\x2d22ee\x2d4c41\x2da7eb\x2d8bbeae14dcf6.mount has successfully entered the 'dead' state. Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.911280896Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.911315798Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-30b2f192\x2d9ddb\x2d4ffd\x2d942a\x2d0af2e1ba2c0b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-30b2f192\x2d9ddb\x2d4ffd\x2d942a\x2d0af2e1ba2c0b.mount has successfully entered the 'dead' state. Jan 23 16:23:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4f9e27a6\x2d713f\x2d4cf1\x2d977b\x2d192eb673e87b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4f9e27a6\x2d713f\x2d4cf1\x2d977b\x2d192eb673e87b.mount has successfully entered the 'dead' state. Jan 23 16:23:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-81dd69c4\x2d7e77\x2d4a93\x2db2bc\x2d8983d9720e37.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-81dd69c4\x2d7e77\x2d4a93\x2db2bc\x2d8983d9720e37.mount has successfully entered the 'dead' state. Jan 23 16:23:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5bbca284\x2d2a82\x2d4cae\x2db101\x2dd9ccad290f8d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5bbca284\x2d2a82\x2d4cae\x2db101\x2dd9ccad290f8d.mount has successfully entered the 'dead' state. Jan 23 16:23:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ba4f0dd6\x2d22ee\x2d4c41\x2da7eb\x2d8bbeae14dcf6.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ba4f0dd6\x2d22ee\x2d4c41\x2da7eb\x2d8bbeae14dcf6.mount has successfully entered the 'dead' state. Jan 23 16:23:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4f9e27a6\x2d713f\x2d4cf1\x2d977b\x2d192eb673e87b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4f9e27a6\x2d713f\x2d4cf1\x2d977b\x2d192eb673e87b.mount has successfully entered the 'dead' state. Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.959350738Z" level=info msg="runSandbox: deleting pod ID 1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7 from idIndex" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.959387145Z" level=info msg="runSandbox: removing pod sandbox 1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.959407305Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.959425366Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.959365387Z" level=info msg="runSandbox: deleting pod ID ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e from idIndex" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.959469755Z" level=info msg="runSandbox: removing pod sandbox ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.959489729Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.959504953Z" level=info msg="runSandbox: unmounting shmPath for sandbox ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.960416592Z" level=info msg="runSandbox: deleting pod ID 54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d from idIndex" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.960446153Z" level=info msg="runSandbox: removing pod sandbox 
54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.960459459Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.960471982Z" level=info msg="runSandbox: unmounting shmPath for sandbox 54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.963278465Z" level=info msg="runSandbox: deleting pod ID 699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75 from idIndex" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.963303764Z" level=info msg="runSandbox: removing pod sandbox 699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.963323865Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.963337191Z" level=info msg="runSandbox: unmounting shmPath for sandbox 699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.965274847Z" level=info msg="runSandbox: deleting pod ID 3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d from idIndex" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.965306199Z" level=info msg="runSandbox: removing pod sandbox 3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.965319903Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.965333078Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.967524221Z" level=info msg="runSandbox: removing pod sandbox from storage: 54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.970727071Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.970746956Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d52093a9-40ff-46b9-8bb0-92066ea4d2db name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.971014 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.971075 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.971100 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.971155 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.975404409Z" level=info msg="runSandbox: removing pod sandbox from storage: 1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.975489455Z" level=info msg="runSandbox: removing pod sandbox from storage: ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.978779109Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.978800877Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=72ab3fc2-da75-491f-9681-d68b0858d89a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.979055 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.979095 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.979122 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.979167 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.979707501Z" level=info msg="runSandbox: removing pod sandbox from storage: 3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.981730029Z" level=info msg="runSandbox: removing pod sandbox from storage: 699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.986118289Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.986139849Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=42590295-524a-4a93-ab72-2d2ccec8e0f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.986283 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.986322 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.986346 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.986392 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.989772666Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.989792612Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=1a893b76-dd0e-46ab-bdab-bb389779cb80 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.990003 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.990041 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.990065 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.990105 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.992849859Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:21.992870647Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=99d13f9f-1d02-4668-86b6-a72c6184a767 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.993040 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.993084 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.993106 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:23:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:21.993149 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:23:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:22.026355 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:23:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:22.026567 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:23:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:22.026594 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.026787410Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:22.026806 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.026821926Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:23:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:22.026876 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.026934114Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.026963842Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.027050389Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.027067135Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.027176079Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.027201235Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.027242869Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.027258380Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.053589731Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd 
NetNS:/var/run/netns/2ccd388e-4dd6-43d5-95a0-b0a02edcc6c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.053765565Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.054119919Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/f1342d85-d490-488d-9c9a-cb6d265c343e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.054139416Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.055267865Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/ca7188ee-0382-42db-b97a-37c7187cc520 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.055289708Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.055877196Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/883b6a86-d379-4d4b-b7a7-71492754f54d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.055894932Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.056780574Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ef15b860-f912-4bc5-8be8-a0811e3946d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:22.056798093Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-81dd69c4\x2d7e77\x2d4a93\x2db2bc\x2d8983d9720e37.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-81dd69c4\x2d7e77\x2d4a93\x2db2bc\x2d8983d9720e37.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-81dd69c4\x2d7e77\x2d4a93\x2db2bc\x2d8983d9720e37.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-81dd69c4\x2d7e77\x2d4a93\x2db2bc\x2d8983d9720e37.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-30b2f192\x2d9ddb\x2d4ffd\x2d942a\x2d0af2e1ba2c0b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-30b2f192\x2d9ddb\x2d4ffd\x2d942a\x2d0af2e1ba2c0b.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-30b2f192\x2d9ddb\x2d4ffd\x2d942a\x2d0af2e1ba2c0b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-30b2f192\x2d9ddb\x2d4ffd\x2d942a\x2d0af2e1ba2c0b.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4f9e27a6\x2d713f\x2d4cf1\x2d977b\x2d192eb673e87b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4f9e27a6\x2d713f\x2d4cf1\x2d977b\x2d192eb673e87b.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3519c3c588440efa760e855660769a5abaf541730a0ef7870865fea707f2d62d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5bbca284\x2d2a82\x2d4cae\x2db101\x2dd9ccad290f8d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5bbca284\x2d2a82\x2d4cae\x2db101\x2dd9ccad290f8d.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ba4f0dd6\x2d22ee\x2d4c41\x2da7eb\x2d8bbeae14dcf6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ba4f0dd6\x2d22ee\x2d4c41\x2da7eb\x2d8bbeae14dcf6.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1a2849eef466de5bcfc7aceffe96dfb20bf6d1b30e5731f393d29a98fe9e4ea7-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-699a96e02dd3f075dc3af35fd46f47e606150025db730a3fc9c2148d81d5df75-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ba2361729634e0c32963cdc657b6bdddc4c1aa13b91df7859dc380d84841884e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-54dc60a58e749d30023df41622098669d4f6018ecd08d595f2d97dea9f99f12d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:24.996253 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:23:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:24.996754 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:27.858680 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:27.858719 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:27.858726 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:27.858733 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:27.858739 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:27.858747 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:27.858753 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:23:28 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:28.142216168Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:23:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:35.996985 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:23:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:35.997504 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.035808969Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.036009217Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7050bffb\x2d364f\x2d4c3c\x2d9e7d\x2d83b49f22113c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7050bffb\x2d364f\x2d4c3c\x2d9e7d\x2d83b49f22113c.mount has successfully entered the 'dead' state. Jan 23 16:23:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7050bffb\x2d364f\x2d4c3c\x2d9e7d\x2d83b49f22113c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7050bffb\x2d364f\x2d4c3c\x2d9e7d\x2d83b49f22113c.mount has successfully entered the 'dead' state. Jan 23 16:23:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7050bffb\x2d364f\x2d4c3c\x2d9e7d\x2d83b49f22113c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7050bffb\x2d364f\x2d4c3c\x2d9e7d\x2d83b49f22113c.mount has successfully entered the 'dead' state. 
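
Every CNI ADD failure in this capture follows the same pattern: Multus polls for OVN-Kubernetes's readiness indicator file at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf and gives up with "pollimmediate error: timed out waiting for the condition". That error string is the one produced by wait.PollImmediate from k8s.io/apimachinery. The following is a minimal sketch of that polling pattern, not Multus's actual source; the helper name is hypothetical and the path and timeout are taken from the log:

// readiness_wait.go — illustrative sketch of the file-polling loop implied by
// the "still waiting for readinessindicatorfile ... pollimmediate error"
// messages above. Hypothetical helper; only the path comes from the log.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator blocks until the indicator file exists or the
// timeout elapses. wait.PollImmediate runs the condition once immediately and
// then on every interval tick; on deadline it returns an error whose message
// is "timed out waiting for the condition" — the exact string surfaced in the
// kubelet errors in this log.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(1*time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err == nil {
			return true, nil // file present: default network is ready
		}
		return false, nil // keep polling; a non-nil error would abort early
	})
}

func main() {
	err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 30*time.Second)
	fmt.Println("wait result:", err)
}

The indicator file appears to be written by the default network plugin once node networking is up, so these repeated sandbox failures look like a symptom rather than a cause: the ovnkube-node container on this node is in CrashLoopBackOff (see the "back-off 5m0s restarting failed container=ovnkube-node" entries above), the file is never written, and every CNI ADD and DEL times out against it.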
Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.076345942Z" level=info msg="runSandbox: deleting pod ID a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6 from idIndex" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.076377457Z" level=info msg="runSandbox: removing pod sandbox a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.076392494Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.076407778Z" level=info msg="runSandbox: unmounting shmPath for sandbox a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.092429722Z" level=info msg="runSandbox: removing pod sandbox from storage: a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.095231130Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:44.095251447Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=b78640cc-9f84-4ca2-b106-05c6faf2a1a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:44.095445 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:23:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:44.095494 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:44.095518 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:44.095567 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:23:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a159b23ca503f127ca1bc9ff41dea5f953349acf851732dce283bf3afc1070b6-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:46.996932 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:23:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:46.997479 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.034272439Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.034318824Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-994cc594\x2dea54\x2d4b78\x2db7bc\x2d47bdb149499a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-994cc594\x2dea54\x2d4b78\x2db7bc\x2d47bdb149499a.mount has successfully entered the 'dead' state. Jan 23 16:23:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-994cc594\x2dea54\x2d4b78\x2db7bc\x2d47bdb149499a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-994cc594\x2dea54\x2d4b78\x2db7bc\x2d47bdb149499a.mount has successfully entered the 'dead' state. Jan 23 16:23:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-994cc594\x2dea54\x2d4b78\x2db7bc\x2d47bdb149499a.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-994cc594\x2dea54\x2d4b78\x2db7bc\x2d47bdb149499a.mount has successfully entered the 'dead' state. Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.071305139Z" level=info msg="runSandbox: deleting pod ID c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac from idIndex" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.071330193Z" level=info msg="runSandbox: removing pod sandbox c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.071349986Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.071362159Z" level=info msg="runSandbox: unmounting shmPath for sandbox c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac-userdata-shm.mount has successfully entered the 'dead' state. 
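
The crio "runSandbox:" lines around here always emit the same teardown sequence after a failed sandbox: index entries first, then the shm mount, then backing storage, then name reservations. A compact reconstruction of that ordering follows; the step names mirror the log messages, but every function below is a hypothetical stub for illustration, not CRI-O's internal API:

// sandbox_cleanup.go — illustrative reconstruction of the cleanup order
// CRI-O logs above. Stubs only; step names are copied from the log.
package main

import "fmt"

type step struct {
	name string
	run  func(sandboxID string) error
}

// cleanupFailedSandbox applies the ordering seen in the journal: the sandbox
// must disappear from the id indexes before its shm mount is unmounted, and
// its container/pod name reservations are released only after storage is gone.
func cleanupFailedSandbox(sandboxID string) error {
	steps := []step{
		{"deleting pod ID from idIndex", func(string) error { return nil }},
		{"removing pod sandbox", func(string) error { return nil }},
		{"deleting container ID from idIndex", func(string) error { return nil }},
		{"unmounting shmPath", func(string) error { return nil }},
		{"removing pod sandbox from storage", func(string) error { return nil }},
		{"releasing container name", func(string) error { return nil }},
		{"releasing pod sandbox name", func(string) error { return nil }},
	}
	for _, s := range steps {
		if err := s.run(sandboxID); err != nil {
			return fmt.Errorf("%s: %w", s.name, err)
		}
		fmt.Printf("runSandbox: %s (%s)\n", s.name, sandboxID)
	}
	return nil
}

func main() {
	_ = cleanupFailedSandbox("c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac")
}

The accompanying systemd "...-userdata-shm.mount: Succeeded" entries line up with the "unmounting shmPath" step: once CRI-O unmounts the sandbox's shm, systemd observes the mount unit entering the 'dead' state.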
Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.092462881Z" level=info msg="runSandbox: removing pod sandbox from storage: c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.095673892Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:49.095693005Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=6eceba53-dd99-46c2-bdaa-5aabd4889477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:49.095897 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:49.095943 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:23:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:49.095965 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:23:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:49.096012 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c0cce09bd61d4dfe73d8837fb83dbe0546789add19ab7edcad471352f1ce03ac): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.043327562Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.043370204Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.043789271Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.043825084Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.046034609Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.046071520Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.047049249Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.047077128Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a47c4098\x2dd4ad\x2d44eb\x2d9224\x2d594872a1e486.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a47c4098\x2dd4ad\x2d44eb\x2d9224\x2d594872a1e486.mount has successfully entered the 'dead' state. Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-318cb451\x2d3908\x2d4904\x2d8aa0\x2d4498ba5b8a06.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-318cb451\x2d3908\x2d4904\x2d8aa0\x2d4498ba5b8a06.mount has successfully entered the 'dead' state. Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f826a946\x2d4671\x2d487d\x2da976\x2d34a5373f6164.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f826a946\x2d4671\x2d487d\x2da976\x2d34a5373f6164.mount has successfully entered the 'dead' state. Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bf613e61\x2d1b1a\x2d4243\x2db4f6\x2dbb75ea94f447.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bf613e61\x2d1b1a\x2d4243\x2db4f6\x2dbb75ea94f447.mount has successfully entered the 'dead' state. Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a47c4098\x2dd4ad\x2d44eb\x2d9224\x2d594872a1e486.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a47c4098\x2dd4ad\x2d44eb\x2d9224\x2d594872a1e486.mount has successfully entered the 'dead' state. Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bf613e61\x2d1b1a\x2d4243\x2db4f6\x2dbb75ea94f447.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-bf613e61\x2d1b1a\x2d4243\x2db4f6\x2dbb75ea94f447.mount has successfully entered the 'dead' state. 
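
The \x2d runs in the mount unit names above are systemd's standard unit-name escaping: "/" in a path becomes "-" in the unit name, so a literal "-" must be encoded as \x2d. A small sketch that decodes these names back into paths, assuming only that documented rule (roughly what `systemd-escape --unescape --path` does):

// unit_unescape.go — decodes systemd-escaped mount unit names like
// run-netns-994cc594\x2dea54\x2d4b78\x2db7bc\x2d47bdb149499a.mount back into
// filesystem paths. Sketch of the documented escaping rule only.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); {
		// \xNN encodes one escaped byte (e.g. \x2d is "-")
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/') // an unescaped "-" is a path separator
		} else {
			b.WriteByte(name[i])
		}
		i++
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnitPath(`run-netns-994cc594\x2dea54\x2d4b78\x2db7bc\x2d47bdb149499a.mount`))
	// prints /run/netns/994cc594-ea54-4b78-b7bc-47bdb149499a
}

Decoded this way, the run-netns-*.mount and run-utsns-*.mount units correspond to the per-sandbox namespace bind mounts under /run (the NetNS:/var/run/netns/... paths in the "Got pod network" entries are the same files, /var/run being a symlink to /run), which is why each failed sandbox cleanup is followed by a burst of these units entering the 'dead' state.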
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-318cb451\x2d3908\x2d4904\x2d8aa0\x2d4498ba5b8a06.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-318cb451\x2d3908\x2d4904\x2d8aa0\x2d4498ba5b8a06.mount has successfully entered the 'dead' state. Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.088279626Z" level=info msg="runSandbox: deleting pod ID f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052 from idIndex" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.088304609Z" level=info msg="runSandbox: removing pod sandbox f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.088319102Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.088330679Z" level=info msg="runSandbox: unmounting shmPath for sandbox f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.096327475Z" level=info msg="runSandbox: deleting pod ID 59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5 from idIndex" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.096355102Z" level=info msg="runSandbox: removing pod sandbox 59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.096367765Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.096379542Z" level=info msg="runSandbox: unmounting shmPath for sandbox 59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.096327880Z" level=info msg="runSandbox: deleting pod ID 65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193 from idIndex" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.096435179Z" level=info msg="runSandbox: removing pod sandbox 65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.096448184Z" level=info 
msg="runSandbox: deleting container ID from idIndex for sandbox 65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.096460650Z" level=info msg="runSandbox: unmounting shmPath for sandbox 65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.097317382Z" level=info msg="runSandbox: deleting pod ID 1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518 from idIndex" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.097347101Z" level=info msg="runSandbox: removing pod sandbox 1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.097361519Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.097375053Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.113426906Z" level=info msg="runSandbox: removing pod sandbox from storage: f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.116619119Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.116637169Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=ae1fa8d0-6edf-4542-8647-a1bf24427260 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.116823 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.116864 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.116887 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.116934 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.120438723Z" level=info msg="runSandbox: removing pod sandbox from storage: 59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.120531627Z" level=info msg="runSandbox: removing pod sandbox from storage: 1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.121434340Z" level=info msg="runSandbox: removing pod sandbox from storage: 65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.123703719Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.123722255Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=0be74718-7df3-4120-a895-e147cdf9fbb3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.123942 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.123980 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.124004 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.124052 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.126776632Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.126795365Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=3c430563-732a-4d04-80c9-f6889b776982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.127065 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.127096 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.127116 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.127153 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.129812628Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:51.129830302Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=65c2eb8a-ab73-4096-b5ef-83bf5e96c3d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.130036 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.130076 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.130097 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:23:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:51.130131 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f826a946\x2d4671\x2d487d\x2da976\x2d34a5373f6164.mount: Succeeded.
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f826a946\x2d4671\x2d487d\x2da976\x2d34a5373f6164.mount: Succeeded.
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bf613e61\x2d1b1a\x2d4243\x2db4f6\x2dbb75ea94f447.mount: Succeeded.
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a47c4098\x2dd4ad\x2d44eb\x2d9224\x2d594872a1e486.mount: Succeeded.
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-318cb451\x2d3908\x2d4904\x2d8aa0\x2d4498ba5b8a06.mount: Succeeded.
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f731c40d76b2cde8be946cb8f602c2bee864adf2d1b6a42e29c62fc67ff7e052-userdata-shm.mount: Succeeded.
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-65fb7d90c674aed4f34a96e58e2566a40ab5f332242ebc13fedf1424b3380193-userdata-shm.mount: Succeeded.
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-59f8136acb0885c93b6474fc957668190034b864724e2a64de451b99e7a69df5-userdata-shm.mount: Succeeded.
Jan 23 16:23:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1fb5d96c4cfbab392ca7c4031a9616d2b25558bf562fc1652a155c5e67d17518-userdata-shm.mount: Succeeded.
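[Annotation] Every failure recorded above has the same root cause: Multus, the meta-CNI plugin, will not service CNI ADD requests until its readiness indicator file exists, and that file (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf) only appears once the default OVN-Kubernetes network is functional on this node. The "pollimmediate error: timed out waiting for the condition" text is the signature of a polling wait on that path giving up. The sketch below is a minimal illustration of that pattern, assuming the k8s.io/apimachinery wait package; it is not Multus's actual source, and the interval and timeout values are placeholders.

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until the readiness indicator file
    // exists, mirroring the behavior reported in the log. On timeout it
    // returns the familiar "timed out waiting for the condition" error.
    func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
    	return wait.PollImmediate(interval, timeout, func() (bool, error) {
    		if _, err := os.Stat(path); err != nil {
    			if os.IsNotExist(err) {
    				return false, nil // not ready yet; keep polling
    			}
    			return false, err // unexpected stat error aborts the wait
    		}
    		return true, nil // file present: default network is ready
    	})
    }

    func main() {
    	// Placeholder interval/timeout; the real plugin's values differ.
    	err := waitForReadinessIndicator(
    		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
    		time.Second, 30*time.Second)
    	if err != nil {
    		fmt.Println("have you checked that your default network is ready?", err)
    	}
    }

The practical implication for triage: the pods named in these records (dns-default, etcd-guard, installer-10, revision-pruner-8) are victims, not causes; the condition that never becomes true is OVN-Kubernetes writing its CNI configuration.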
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.043301937Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.043339602Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.043751243Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.043779046Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.045595834Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.045618631Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox
3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-50eb3ac2\x2dc442\x2d4463\x2dab3c\x2dd3a4e6c44f57.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9c9f9171\x2dc84f\x2d41b6\x2d9c9b\x2da8874a09b8c3.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b3af4355\x2d192e\x2d4049\x2da22c\x2dcadb8d3d7347.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-50eb3ac2\x2dc442\x2d4463\x2dab3c\x2dd3a4e6c44f57.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9c9f9171\x2dc84f\x2d41b6\x2d9c9b\x2da8874a09b8c3.mount: Succeeded.
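[Annotation] Once the ADD has timed out, CRI-O's runSandbox error path unwinds the partially created sandbox, and the interleaved info messages trace a fixed order: delete the pod ID from the idIndex, remove the pod sandbox, delete the container ID from the idIndex, unmount the sandbox's shmPath, remove the sandbox from storage, then release the reserved container and pod sandbox names. The systemd "Succeeded" records are the matching run-utsns/run-ipcns/run-netns bind mounts being torn down. The outline below merely restates that observed order as code; the function is invented for illustration and is not CRI-O's implementation.

    package main

    import "log"

    // cleanupSandbox restates the teardown order observed in the crio
    // messages above. The step strings are taken from the log; the
    // function itself is illustrative only, not CRI-O code.
    func cleanupSandbox(sandboxID string) {
    	for _, step := range []string{
    		"deleting pod ID " + sandboxID + " from idIndex",
    		"removing pod sandbox " + sandboxID,
    		"deleting container ID from idIndex for sandbox " + sandboxID,
    		"unmounting shmPath for sandbox " + sandboxID,
    		"removing pod sandbox from storage: " + sandboxID,
    		"releasing container name",
    		"releasing pod sandbox name",
    	} {
    		log.Printf("runSandbox: %s", step)
    	}
    }

    func main() {
    	cleanupSandbox("3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616")
    }

Because the whole sequence completes with info-level messages, the repeated teardown itself is healthy; it is the ADD timeout in front of it that needs fixing.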
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.083325468Z" level=info msg="runSandbox: deleting pod ID 61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db from idIndex" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.083361937Z" level=info msg="runSandbox: removing pod sandbox 61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.083385756Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.083401119Z" level=info msg="runSandbox: unmounting shmPath for sandbox 61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.083325514Z" level=info msg="runSandbox: deleting pod ID 9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1 from idIndex" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.083440296Z" level=info msg="runSandbox: removing pod sandbox 9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.083456754Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.083471504Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.091297351Z" level=info msg="runSandbox: deleting pod ID 3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616 from idIndex" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.091320850Z" level=info msg="runSandbox: removing pod sandbox 3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.091332237Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.091344953Z" level=info msg="runSandbox: unmounting shmPath for
sandbox 3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.099528607Z" level=info msg="runSandbox: removing pod sandbox from storage: 9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.099531984Z" level=info msg="runSandbox: removing pod sandbox from storage: 61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.103222707Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.103244863Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=774c305d-38b3-42be-8e9a-2fd23d83384f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.103435 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.103481 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.103508 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.103535395Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.103564 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.106686738Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.106706492Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=01ed8fe1-445b-40ea-920f-280dcccb6093 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.106887 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.106921 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.106943 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.106981 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.109636817Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:52.109656107Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=0fca8e55-b012-49a5-a9d3-9dc9b05a2f35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.109874 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.109910 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.109932 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:52.109968 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b3af4355\x2d192e\x2d4049\x2da22c\x2dcadb8d3d7347.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b3af4355\x2d192e\x2d4049\x2da22c\x2dcadb8d3d7347.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-50eb3ac2\x2dc442\x2d4463\x2dab3c\x2dd3a4e6c44f57.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9c9f9171\x2dc84f\x2d41b6\x2d9c9b\x2da8874a09b8c3.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9552e7de5dd6aa88a0dd311a7cc6bb856c80fdb1d1aa0c6dcd497db307ff7fe1-userdata-shm.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3d2c40ac9990cfec4645d0b189f3d1cad9b686ccf4c408847043ff69aa7e0616-userdata-shm.mount: Succeeded.
Jan 23 16:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-61ff2e5653e399f9c6c1ee1eb937444707e3468b3a800a5e426620a2f2ae36db-userdata-shm.mount: Succeeded.
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.042278811Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.042326997Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.042796310Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.042840111Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.043075798Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.043108307Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d6cc9866\x2de258\x2d4e44\x2d8738\x2d1903e3b8aecc.mount: Succeeded.
Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e6b9de60\x2dce2f\x2d41c3\x2d9776\x2d84d35a7a1a0f.mount: Succeeded.
Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4e4db943\x2d11a2\x2d4a37\x2dbdcf\x2d41c1be07aac4.mount: Succeeded.
Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e6b9de60\x2dce2f\x2d41c3\x2d9776\x2d84d35a7a1a0f.mount: Succeeded.
Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4e4db943\x2d11a2\x2d4a37\x2dbdcf\x2d41c1be07aac4.mount: Succeeded.
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.093342480Z" level=info msg="runSandbox: deleting pod ID 7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678 from idIndex" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.093370385Z" level=info msg="runSandbox: removing pod sandbox 7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.093387133Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.093401897Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.093451119Z" level=info msg="runSandbox: deleting pod ID f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321 from idIndex" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.093482531Z" level=info msg="runSandbox: removing pod sandbox f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.093498792Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.093511851Z" level=info msg="runSandbox: unmounting shmPath for sandbox f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.102298659Z" level=info msg="runSandbox: deleting pod ID d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b from idIndex" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.102324963Z" level=info msg="runSandbox: removing pod sandbox d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.102344391Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.102356255Z" level=info msg="runSandbox: unmounting shmPath for
sandbox d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.106415095Z" level=info msg="runSandbox: removing pod sandbox from storage: f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.109661329Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.109681374Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=27765318-f689-49b1-8f4f-e932333099a5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.109931 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.110089 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.110111 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.110158 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.110438075Z" level=info msg="runSandbox: removing pod sandbox from storage: 7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.117191611Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.117221345Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=260bfa4d-9bea-4497-9c21-d3eb591e706f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.117447 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.117485 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.117509 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.117553 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.118457903Z" level=info msg="runSandbox: removing pod sandbox from storage: d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.121658193Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:54.121677057Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=92b32079-edb7-464f-a121-9c722b68f2f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.121780 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.121813 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.121840 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:23:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:23:54.121879 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d6cc9866\x2de258\x2d4e44\x2d8738\x2d1903e3b8aecc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d6cc9866\x2de258\x2d4e44\x2d8738\x2d1903e3b8aecc.mount has successfully entered the 'dead' state. Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d6cc9866\x2de258\x2d4e44\x2d8738\x2d1903e3b8aecc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d6cc9866\x2de258\x2d4e44\x2d8738\x2d1903e3b8aecc.mount has successfully entered the 'dead' state. Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e6b9de60\x2dce2f\x2d41c3\x2d9776\x2d84d35a7a1a0f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e6b9de60\x2dce2f\x2d41c3\x2d9776\x2d84d35a7a1a0f.mount has successfully entered the 'dead' state. Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4e4db943\x2d11a2\x2d4a37\x2dbdcf\x2d41c1be07aac4.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4e4db943\x2d11a2\x2d4a37\x2dbdcf\x2d41c1be07aac4.mount has successfully entered the 'dead' state. Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f1eadf6951907495b65ac5da756271b34af9b1841efe03bfdd100f6b2f903321-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d15428c4aff8383c03d041ea6ded2b22f8a6e93bd8df1a5ad897ce2bd1f8b58b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7ce31a2b6e0142eb4a5a4dbe8fadf2b328136a7f3ad3fd805aa864e1b57c3678-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:23:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:55.995785 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:23:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:55.996162910Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:55.996227079Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:23:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:56.007884662Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/7a0aaa00-74d8-41c2-a833-db805ac53709 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:23:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:56.007915812Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:58.143333000Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:23:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:23:59.995733 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:23:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:59.996096337Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:23:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:23:59.996137834Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:00.007292024Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/c5e50671-5cd7-4e86-91c6-0958b9312d43 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:00.007441780Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:00.996290 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:24:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:00.996800 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:24:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:03.996033 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:03.996396632Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:03.996454847Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:04.008087659Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/dae02c7e-1303-4025-a0e2-97d4d0fdd9ee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:04.008124323Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:04.995633 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:24:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:04.995730 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:24:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:04.995869 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:24:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:04.995991834Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:04.996043250Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:04.996064817Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:04.996105225Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:04.996174017Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:04.996222032Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.017195607Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/3c08fdf2-0b1a-462f-8778-f489f723294f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.017230360Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.017750794Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/0e1ac1dc-00fb-40fb-a7b4-689c5ad1cf67 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.017773433Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.020393588Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/ebc4e1df-f04c-4693-9a36-865bf4aedcd9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.020414596Z" level=info msg="Adding pod 
openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:05.995676 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:24:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:05.995783 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:24:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:05.995920 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:24:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:05.995952 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.996011141Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.996055594Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.996113385Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.996145948Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.996162325Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.996190464Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.996244246Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:05.996291619Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.018617627Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/c6f73427-2e7b-4ef5-b806-407a9b56142d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] 
Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.018642699Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.019312868Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/2a2b2a19-3ea1-42b3-b1ae-9c01f8a3a15b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.019333391Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.020257955Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/641ff746-d7dc-473d-9386-be595a13e00b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.020279571Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.021531308Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/7bd0c57f-aab2-4637-a379-c6101e36b05e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.021554794Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:06.995572 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.996086431Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:06.996140239Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.006810576Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2ec9dc9c-9484-4f35-9545-3ed781ec9cd8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.006830709Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.066474629Z" level=info msg="NetworkStart: stopping network for sandbox 783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.066624227Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/2ccd388e-4dd6-43d5-95a0-b0a02edcc6c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.066647824Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.066654874Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.066662145Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068171957Z" level=info msg="NetworkStart: stopping network for sandbox c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068256250Z" level=info msg="NetworkStart: stopping network for sandbox 1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068344038Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2 
UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/883b6a86-d379-4d4b-b7a7-71492754f54d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068368149Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068375436Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068382541Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068388810Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ef15b860-f912-4bc5-8be8-a0811e3946d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068415741Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068422195Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068428175Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068931697Z" level=info msg="NetworkStart: stopping network for sandbox 8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.068941197Z" level=info msg="NetworkStart: stopping network for sandbox 466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.069040904Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/ca7188ee-0382-42db-b97a-37c7187cc520 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.069073534Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.069083303Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.069089612Z" level=info msg="Deleting pod 
openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.069130154Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/f1342d85-d490-488d-9c9a-cb6d265c343e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.069152061Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.069159119Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.069165899Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:07.996413 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.996803430Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:07.996841604Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:08.007538359Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f3f5732c-40b3-437d-9c32-f1a52bbded40 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:08.007562117Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:12.996398 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:24:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:12.996925 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:24:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:25.996844 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:24:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 
16:24:25.997373 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:27.859069 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:27.859090 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:27.859100 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:27.859107 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:27.859114 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:27.859122 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:27.859131 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:24:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:28.141178035Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:24:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:39.996478 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:24:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:39.997130 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:24:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:41.022412594Z" level=info msg="NetworkStart: stopping network for sandbox 8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:41.022568646Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/7a0aaa00-74d8-41c2-a833-db805ac53709 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:41.022598608Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:41.022606465Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:41.022613874Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:45.019313334Z" level=info msg="NetworkStart: stopping network for sandbox 89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:45.019472801Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/c5e50671-5cd7-4e86-91c6-0958b9312d43 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:45.019499251Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:45.019506360Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:45.019512697Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:49.021331851Z" level=info msg="NetworkStart: stopping network for sandbox 28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:49.021485716Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/dae02c7e-1303-4025-a0e2-97d4d0fdd9ee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:49.021507909Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:49.021514358Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:49.021520470Z" level=info msg="Deleting pod 
openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.030636801Z" level=info msg="NetworkStart: stopping network for sandbox e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.030785497Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/3c08fdf2-0b1a-462f-8778-f489f723294f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.030811869Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.030819264Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.030826026Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.032402967Z" level=info msg="NetworkStart: stopping network for sandbox aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.032542639Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/0e1ac1dc-00fb-40fb-a7b4-689c5ad1cf67 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.032565267Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.032572320Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.032578628Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.034175515Z" level=info msg="NetworkStart: stopping network for sandbox 9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.034317945Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 
NetNS:/var/run/netns/ebc4e1df-f04c-4693-9a36-865bf4aedcd9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.034344341Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.034351370Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:24:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:50.034357267Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:50.996179 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333"
Jan 23 16:24:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:50.996688 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.032869412Z" level=info msg="NetworkStart: stopping network for sandbox faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.032985977Z" level=info msg="NetworkStart: stopping network for sandbox 8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.033016011Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/2a2b2a19-3ea1-42b3-b1ae-9c01f8a3a15b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.033038273Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.033046065Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.033052708Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.033116269Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/641ff746-d7dc-473d-9386-be595a13e00b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.033137880Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.033145509Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.033152713Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.034652541Z" level=info msg="NetworkStart: stopping network for sandbox 5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.034788008Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/7bd0c57f-aab2-4637-a379-c6101e36b05e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.034816034Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.034823563Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.034831127Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.034826006Z" level=info msg="NetworkStart: stopping network for sandbox fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.035018179Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/c6f73427-2e7b-4ef5-b806-407a9b56142d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.035044930Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.035054123Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:24:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:51.035061006Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.019898361Z" level=info msg="NetworkStart: stopping network for sandbox f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.020047680Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2ec9dc9c-9484-4f35-9545-3ed781ec9cd8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.020073754Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.020080954Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.020088034Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.077323291Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.077363646Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.079004708Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.079042758Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.079012040Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.079123441Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.079479777Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.079506164Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.079624208Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.079651625Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2ccd388e\x2d4dd6\x2d43d5\x2d95a0\x2db0a02edcc6c9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2ccd388e\x2d4dd6\x2d43d5\x2d95a0\x2db0a02edcc6c9.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ef15b860\x2df912\x2d4bc5\x2d8be8\x2da0811e3946d1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ef15b860\x2df912\x2d4bc5\x2d8be8\x2da0811e3946d1.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-883b6a86\x2dd379\x2d4d4b\x2db7a7\x2d71492754f54d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-883b6a86\x2dd379\x2d4d4b\x2db7a7\x2d71492754f54d.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ca7188ee\x2d0382\x2d42db\x2db97a\x2d37c7187cc520.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ca7188ee\x2d0382\x2d42db\x2db97a\x2d37c7187cc520.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f1342d85\x2dd490\x2d488d\x2d9c9a\x2dcb6d265c343e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-f1342d85\x2dd490\x2d488d\x2d9c9a\x2dcb6d265c343e.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ef15b860\x2df912\x2d4bc5\x2d8be8\x2da0811e3946d1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ef15b860\x2df912\x2d4bc5\x2d8be8\x2da0811e3946d1.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-883b6a86\x2dd379\x2d4d4b\x2db7a7\x2d71492754f54d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-883b6a86\x2dd379\x2d4d4b\x2db7a7\x2d71492754f54d.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ca7188ee\x2d0382\x2d42db\x2db97a\x2d37c7187cc520.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ca7188ee\x2d0382\x2d42db\x2db97a\x2d37c7187cc520.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f1342d85\x2dd490\x2d488d\x2d9c9a\x2dcb6d265c343e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-f1342d85\x2dd490\x2d488d\x2d9c9a\x2dcb6d265c343e.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2ccd388e\x2d4dd6\x2d43d5\x2d95a0\x2db0a02edcc6c9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2ccd388e\x2d4dd6\x2d43d5\x2d95a0\x2db0a02edcc6c9.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-883b6a86\x2dd379\x2d4d4b\x2db7a7\x2d71492754f54d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-883b6a86\x2dd379\x2d4d4b\x2db7a7\x2d71492754f54d.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ca7188ee\x2d0382\x2d42db\x2db97a\x2d37c7187cc520.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ca7188ee\x2d0382\x2d42db\x2db97a\x2d37c7187cc520.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f1342d85\x2dd490\x2d488d\x2d9c9a\x2dcb6d265c343e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-f1342d85\x2dd490\x2d488d\x2d9c9a\x2dcb6d265c343e.mount has successfully entered the 'dead' state.
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127370717Z" level=info msg="runSandbox: deleting pod ID c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2 from idIndex" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127391022Z" level=info msg="runSandbox: deleting pod ID 783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b from idIndex" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127427175Z" level=info msg="runSandbox: removing pod sandbox 783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127443424Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127392112Z" level=info msg="runSandbox: deleting pod ID 1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589 from idIndex" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127485348Z" level=info msg="runSandbox: removing pod sandbox 1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127500753Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127515424Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127378535Z" level=info msg="runSandbox: deleting pod ID 8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93 from idIndex" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127605222Z" level=info msg="runSandbox: removing pod sandbox 8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127619592Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127634091Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127400881Z" level=info msg="runSandbox: removing pod sandbox c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127736046Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127751638Z" level=info msg="runSandbox: unmounting shmPath for sandbox c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127378541Z" level=info msg="runSandbox: deleting pod ID 466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6 from idIndex" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127783004Z" level=info msg="runSandbox: removing pod sandbox 466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127799730Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127813693Z" level=info msg="runSandbox: unmounting shmPath for sandbox 466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.127458389Z" level=info msg="runSandbox: unmounting shmPath for sandbox 783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.147550456Z" level=info msg="runSandbox: removing pod sandbox from storage: 1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.147571100Z" level=info msg="runSandbox: removing pod sandbox from storage: 466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.147574600Z" level=info msg="runSandbox: removing pod sandbox from storage: c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.147574910Z" level=info msg="runSandbox: removing pod sandbox from storage: 783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.147559292Z" level=info msg="runSandbox: removing pod sandbox from storage: 8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.150920287Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.150940334Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=5d89c96f-af64-46af-b70f-201c6b50ae6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.151092 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.151142 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.151167 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.151223 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.156619842Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.156748037Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=9f52c113-3761-4171-afb9-6e68c6f30ec8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.159186 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.159234 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.159256 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.159304 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.162297111Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.162318901Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=5c62c905-3d19-4d00-9c20-7c91a66a9ee6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.162437 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.162480 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.162507 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.162558 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.165426082Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.165447185Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=371e127b-f4f4-4552-bf01-dfa335e0e234 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.165683 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.165718 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.165738 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.165774 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.168468141Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.168485355Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=d8d4c176-ff21-4e74-ac9f-a82e9d7e87c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.168582 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.168614 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.168633 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:24:52.168671 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:52.201406 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:52.201560 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:52.201607 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.201752857Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.201784404Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:52.201798 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:24:52.201881 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.201872501Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.201907421Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.201930277Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.201958524Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.202032416Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.202060830Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.202076125Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.202108203Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.228002111Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/77b7dd32-566c-4113-9576-3ef2d7c2bd97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.228222061Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.228463175Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/9faaf861-9770-4a41-b140-08ed5d323c87 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.228481358Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.229838241Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/fa6470c7-7136-43ab-a834-a96fb1d48036 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.229856346Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.232091279Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/afab377d-89b8-41e7-86c8-ad36d2175fb1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.232110263Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.233252287Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/8f64fa06-b1a9-430e-8c07-83f22809fc1e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:52.233271967Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:53.019803917Z" level=info msg="NetworkStart: stopping network for sandbox 041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:24:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:53.019965903Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f3f5732c-40b3-437d-9c32-f1a52bbded40 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:24:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:53.019990587Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:24:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:53.019998170Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:24:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:53.020005011Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:24:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ef15b860\x2df912\x2d4bc5\x2d8be8\x2da0811e3946d1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ef15b860\x2df912\x2d4bc5\x2d8be8\x2da0811e3946d1.mount has successfully entered the 'dead' state.
Jan 23 16:24:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2ccd388e\x2d4dd6\x2d43d5\x2d95a0\x2db0a02edcc6c9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2ccd388e\x2d4dd6\x2d43d5\x2d95a0\x2db0a02edcc6c9.mount has successfully entered the 'dead' state.
Jan 23 16:24:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1806212a8b020a5b373a770ba861d25e1b38d94a677f1ec23fc2ab95cd6ab589-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:24:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c8c5ed071e737ab8c79d8999d7ce7912eaa61ed0afe248adb6c67d00b84454f2-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:24:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-8a94c11e9dbe42e025b7d873151a1073947a1ed7c90f01b242c45bc449ceba93-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:24:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-466b645d13f03c71db66670220fc7dbe54f4568f192c69875228ffba99084bd6-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:24:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-783fc453fe41048ea2b82ed5a553c4af3a7f22bccf46c97cbace57ea6423656b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:24:58.146534944Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:25:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:02.996603 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333"
Jan 23 16:25:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:02.997302 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491108.1196] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 16:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491108.1202] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 16:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491108.1203] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491108.1204] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491108.1209] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491108.1214] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:25:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491109.7694] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:25:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:15.996325 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333"
Jan 23 16:25:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:15.996837 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd[1]: Starting Cleanup of Temporary Directories...
-- Subject: Unit systemd-tmpfiles-clean.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit systemd-tmpfiles-clean.service has begun starting up.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd[1]: Starting Cleanup of Temporary Directories...
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd-tmpfiles[32422]: [/usr/lib/tmpfiles.d/pkg-dbus-daemon.conf:1] Duplicate line for path "/var/lib/dbus", ignoring.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd-tmpfiles[32422]: [/usr/lib/tmpfiles.d/tmp.conf:12] Duplicate line for path "/var/tmp", ignoring.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd-tmpfiles[32422]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd-tmpfiles[32422]: [/usr/lib/tmpfiles.d/var.conf:19] Duplicate line for path "/var/cache", ignoring.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd-tmpfiles[32422]: [/usr/lib/tmpfiles.d/var.conf:21] Duplicate line for path "/var/lib", ignoring.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd-tmpfiles[32422]: [/usr/lib/tmpfiles.d/var.conf:23] Duplicate line for path "/var/spool", ignoring.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd[1]: Started Cleanup of Temporary Directories.
Jan 23 16:25:20 hub-master-0.workload.bos2.lab systemd[1]: systemd-tmpfiles-clean.service: Consumed 13ms CPU time
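[Annotation] The "Duplicate line" warnings are benign: systemd-tmpfiles found the same path declared in more than one tmpfiles.d fragment and ignored all but the first (fragments in /etc/tmpfiles.d override /run/tmpfiles.d, which overrides /usr/lib/tmpfiles.d). To see which fragments collide for a given path, a plain grep is enough; /var/tmp is used here purely as an example:

    grep -rn /var/tmp /etc/tmpfiles.d /run/tmpfiles.d /usr/lib/tmpfiles.d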
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.033484392Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.033925184Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7a0aaa00\x2d74d8\x2d41c2\x2da833\x2ddb805ac53709.mount: Succeeded.
Jan 23 16:25:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7a0aaa00\x2d74d8\x2d41c2\x2da833\x2ddb805ac53709.mount: Consumed 0 CPU time
Jan 23 16:25:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7a0aaa00\x2d74d8\x2d41c2\x2da833\x2ddb805ac53709.mount: Succeeded.
Jan 23 16:25:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7a0aaa00\x2d74d8\x2d41c2\x2da833\x2ddb805ac53709.mount: Consumed 0 CPU time
Jan 23 16:25:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7a0aaa00\x2d74d8\x2d41c2\x2da833\x2ddb805ac53709.mount: Succeeded.
Jan 23 16:25:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7a0aaa00\x2d74d8\x2d41c2\x2da833\x2ddb805ac53709.mount: Consumed 0 CPU time
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.068317390Z" level=info msg="runSandbox: deleting pod ID 8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38 from idIndex" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.068342538Z" level=info msg="runSandbox: removing pod sandbox 8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.068356121Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.068369214Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38-userdata-shm.mount: Succeeded.
Jan 23 16:25:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.083424620Z" level=info msg="runSandbox: removing pod sandbox from storage: 8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.086343608Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:26.086361185Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=3d47f557-9d9c-4d6c-a5fd-15ac10304d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:26.086582 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:26.086861 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:25:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:26.086889 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:25:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:26.086945 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(8030dee716e3230148e1521e24e0ebc847878b02b62f8385cc2ef0c79bd0bb38): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 16:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:27.859285 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:27.859325 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:27.859336 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:27.859344 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:27.859355 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:27.859376 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:27.859386 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:25:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:27.864926504Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=510aef91-838a-46ea-8170-309e8dde54ef name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:25:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:27.865039675Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=510aef91-838a-46ea-8170-309e8dde54ef name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:25:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:28.143088625Z" level=warning msg="Found defunct process with PID 7327 (runc)"
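[Annotation] "Found defunct process with PID 7327 (runc)" is CRI-O noticing a zombie runc child that has exited but never been reaped; it recurs here every 30 seconds with the same PID. A sketch of how to confirm the zombie and identify the parent that should reap it, using a standard ps invocation (illustrative, not output captured from this host):

    ps -o pid,ppid,stat,cmd -p 7327    # STAT "Z" marks a zombie; PPID is the process expected to wait() on it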
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.030449383Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.030492498Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c5e50671\x2d5cd7\x2d4e86\x2d91c6\x2d0958b9312d43.mount: Succeeded.
Jan 23 16:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c5e50671\x2d5cd7\x2d4e86\x2d91c6\x2d0958b9312d43.mount: Consumed 0 CPU time
Jan 23 16:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c5e50671\x2d5cd7\x2d4e86\x2d91c6\x2d0958b9312d43.mount: Succeeded.
Jan 23 16:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c5e50671\x2d5cd7\x2d4e86\x2d91c6\x2d0958b9312d43.mount: Consumed 0 CPU time
Jan 23 16:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c5e50671\x2d5cd7\x2d4e86\x2d91c6\x2d0958b9312d43.mount: Succeeded.
Jan 23 16:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c5e50671\x2d5cd7\x2d4e86\x2d91c6\x2d0958b9312d43.mount: Consumed 0 CPU time
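[Annotation] Every sandbox add and delete in this section fails on the same condition: Multus polls for the OVN-Kubernetes readiness indicator file named in the errors and times out, which is expected while ovnkube-node (in CrashLoopBackOff above) never gets healthy enough to write its CNI config. A sketch of the usual two checks, assuming a shell on the node and oc access (the file path is taken from the log; the commands are not):

    ls -l /var/run/multus/cni/net.d/10-ovn-kubernetes.conf    # absent until ovnkube-node becomes ready
    oc -n openshift-ovn-kubernetes get pods -o wide | grep hub-master-0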
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.073406813Z" level=info msg="runSandbox: deleting pod ID 89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc from idIndex" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.073439127Z" level=info msg="runSandbox: removing pod sandbox 89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.073455194Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.073472675Z" level=info msg="runSandbox: unmounting shmPath for sandbox 89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc-userdata-shm.mount: Succeeded.
Jan 23 16:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.081469369Z" level=info msg="runSandbox: removing pod sandbox from storage: 89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.088273804Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:30.088301386Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=d93d7456-a0d5-4bc9-ab04-a52a1840384a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:30.088541 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:30.088592 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:30.088617 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:30.088673 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(89b2a80fd8f545995deb13d62e9891f6be3f268c52f3938f7ea78a7c3f3b9cdc): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 16:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:30.996652 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333"
Jan 23 16:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:30.997172 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.033077728Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.033118848Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dae02c7e\x2d1303\x2d4025\x2da0e2\x2d97d4d0fdd9ee.mount: Succeeded.
Jan 23 16:25:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dae02c7e\x2d1303\x2d4025\x2da0e2\x2d97d4d0fdd9ee.mount: Consumed 0 CPU time
Jan 23 16:25:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dae02c7e\x2d1303\x2d4025\x2da0e2\x2d97d4d0fdd9ee.mount: Succeeded.
Jan 23 16:25:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dae02c7e\x2d1303\x2d4025\x2da0e2\x2d97d4d0fdd9ee.mount: Consumed 0 CPU time
Jan 23 16:25:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dae02c7e\x2d1303\x2d4025\x2da0e2\x2d97d4d0fdd9ee.mount: Succeeded.
Jan 23 16:25:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dae02c7e\x2d1303\x2d4025\x2da0e2\x2d97d4d0fdd9ee.mount: Consumed 0 CPU time
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.079307070Z" level=info msg="runSandbox: deleting pod ID 28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c from idIndex" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.079333616Z" level=info msg="runSandbox: removing pod sandbox 28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.079347595Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.079362179Z" level=info msg="runSandbox: unmounting shmPath for sandbox 28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c-userdata-shm.mount: Succeeded.
Jan 23 16:25:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.095414628Z" level=info msg="runSandbox: removing pod sandbox from storage: 28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.098660589Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:34.098677534Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=c3867ed4-88dc-4917-80c8-eed1846628e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:34.098863 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:34.098914 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:25:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:34.098938 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:25:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:34.098988 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(28bd5dd7125a9dd4b27a1a03404a5099b9232271d314770324f84b4efc1dc64c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.041791213Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.041839875Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.042924708Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.042961047Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.044767072Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.044801182Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3c08fdf2\x2d0b1a\x2d462f\x2d8778\x2df489f723294f.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3c08fdf2\x2d0b1a\x2d462f\x2d8778\x2df489f723294f.mount: Consumed 0 CPU time
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ebc4e1df\x2df04c\x2d4693\x2d9a36\x2d865bf4aedcd9.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ebc4e1df\x2df04c\x2d4693\x2d9a36\x2d865bf4aedcd9.mount: Consumed 0 CPU time
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0e1ac1dc\x2d00fb\x2d40fb\x2da7b4\x2d689c5ad1cf67.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0e1ac1dc\x2d00fb\x2d40fb\x2da7b4\x2d689c5ad1cf67.mount: Consumed 0 CPU time
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3c08fdf2\x2d0b1a\x2d462f\x2d8778\x2df489f723294f.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3c08fdf2\x2d0b1a\x2d462f\x2d8778\x2df489f723294f.mount: Consumed 0 CPU time
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ebc4e1df\x2df04c\x2d4693\x2d9a36\x2d865bf4aedcd9.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ebc4e1df\x2df04c\x2d4693\x2d9a36\x2d865bf4aedcd9.mount: Consumed 0 CPU time
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0e1ac1dc\x2d00fb\x2d40fb\x2da7b4\x2d689c5ad1cf67.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0e1ac1dc\x2d00fb\x2d40fb\x2da7b4\x2d689c5ad1cf67.mount: Consumed 0 CPU time
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ebc4e1df\x2df04c\x2d4693\x2d9a36\x2d865bf4aedcd9.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ebc4e1df\x2df04c\x2d4693\x2d9a36\x2d865bf4aedcd9.mount: Consumed 0 CPU time
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0e1ac1dc\x2d00fb\x2d40fb\x2da7b4\x2d689c5ad1cf67.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0e1ac1dc\x2d00fb\x2d40fb\x2da7b4\x2d689c5ad1cf67.mount: Consumed 0 CPU time
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3c08fdf2\x2d0b1a\x2d462f\x2d8778\x2df489f723294f.mount: Succeeded.
Jan 23 16:25:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3c08fdf2\x2d0b1a\x2d462f\x2d8778\x2df489f723294f.mount: Consumed 0 CPU time
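[Annotation] By this point the same add/delete failure has hit the scheduler guard, two revision pruners, dns-default, the etcd guard, and the kube-apiserver guard, i.e. every pod on this node that needs a new CNI sandbox. A sketch of a cluster-side view of the backlog (standard oc syntax; the event reason string is the usual kubelet constant and is an assumption here, not read from this log):

    oc get events -A --field-selector reason=FailedCreatePodSandBox --sort-by=.lastTimestamp | tail -n 20
    oc get pods -A -o wide --field-selector spec.nodeName=hub-master-0.workload.bos2.lab | grep -v Running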
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.088308795Z" level=info msg="runSandbox: deleting pod ID aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec from idIndex" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.088334192Z" level=info msg="runSandbox: removing pod sandbox aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.088347115Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.088360271Z" level=info msg="runSandbox: unmounting shmPath for sandbox aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.088410573Z" level=info msg="runSandbox: deleting pod ID 9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991 from idIndex" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.088443834Z" level=info msg="runSandbox: removing pod sandbox 9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.088460155Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.088475042Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.089306048Z" level=info msg="runSandbox: deleting pod ID e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569 from idIndex" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.089330294Z" level=info msg="runSandbox: removing pod sandbox e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.089344046Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.089357789Z" level=info msg="runSandbox: unmounting shmPath for sandbox e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.096547069Z" level=info msg="runSandbox: removing pod sandbox from storage: 9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.100001869Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.100022924Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=11befe7c-9739-4c8e-b620-ade9e6228b8b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.100293 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.100338 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.100362 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.100452 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.100429223Z" level=info msg="runSandbox: removing pod sandbox from storage: e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.100453162Z" level=info msg="runSandbox: removing pod sandbox from storage: aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.103722144Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.103741144Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d1f67ff3-83ca-4b4f-ae67-8a4cd35cc2f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.104009 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.104046 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.104068 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.104109 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.107060643Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:35.107079831Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=26d490b3-1e14-4749-a33b-4c06cb494669 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.107291 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready?
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.107329 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.107351 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:25:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:35.107394 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.043518712Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.043552031Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.043713462Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.043742751Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.044669959Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:36 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:25:36.044697556Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.045944464Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.045976910Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9da4eb9f8e7600ed7545cb13b3d13e065aef9d3b5ddd79c3079b32b4f3ce1991-userdata-shm.mount completed and consumed the indicated resources. Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-aba205beac0690da06ede537ae6a67fcdb2ca1f2b406a45f5a9477c7bb19dfec-userdata-shm.mount completed and consumed the indicated resources. 
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569-userdata-shm.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e82324e7c32eae9ed81bf65430dfd5d4457d2e48017bd82aceaf9d66c958e569-userdata-shm.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount has successfully entered the 'dead' state.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount completed and consumed the indicated resources.
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.087319711Z" level=info msg="runSandbox: deleting pod ID 5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47 from idIndex" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.087353786Z" level=info msg="runSandbox: removing pod sandbox 5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.087371181Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.087385451Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.087320729Z" level=info msg="runSandbox: deleting pod ID 8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8 from idIndex" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.087438471Z" level=info msg="runSandbox: removing pod sandbox 8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.087451898Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.087466542Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.095304162Z" level=info msg="runSandbox: deleting pod ID faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8 from idIndex" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.095329884Z" level=info msg="runSandbox: removing pod sandbox faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.095342580Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.095354727Z" level=info msg="runSandbox: unmounting shmPath for sandbox faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.096300031Z" level=info msg="runSandbox: deleting pod ID fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb from idIndex" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.096326798Z" level=info msg="runSandbox: removing pod sandbox fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.096341860Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.096352931Z" level=info msg="runSandbox: unmounting shmPath for sandbox fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.107444631Z" level=info msg="runSandbox: removing pod sandbox from storage: 8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.108408886Z" level=info msg="runSandbox: removing pod sandbox from storage: 5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.110611316Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.110629894Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=8e14c0a6-26eb-445a-aa90-a5684e40bd29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.110863 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.110910 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.110933 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.110986 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.111410661Z" level=info msg="runSandbox: removing pod sandbox from storage: fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.111432871Z" level=info msg="runSandbox: removing pod sandbox from storage: faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.114027144Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.114045599Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=b5d8f9d5-1b40-4de2-bdca-4f4761ebe513 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.114276 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.114314 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.114335 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.114376 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.116990147Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.117008071Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d774aaf1-c6e0-4af1-9e8f-ae54e24d1a0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.117265 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.117301 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.117322 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.117361 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.119958166Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.119975973Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=827977ba-1bb2-494b-8606-805db9cbd3c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.120141 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.120180 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.120209 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:36.120254 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 16:25:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:36.996258 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.996672074Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:36.996719720Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.008809406Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/74724d4d-0995-42c2-8e4b-021ef9207ecf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.008832377Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.031149661Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.031180505Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2ec9dc9c\x2d9484\x2d4f35\x2d9545\x2d3ed781ec9cd8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2ec9dc9c\x2d9484\x2d4f35\x2d9545\x2d3ed781ec9cd8.mount has successfully entered the 'dead' state.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2ec9dc9c\x2d9484\x2d4f35\x2d9545\x2d3ed781ec9cd8.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2ec9dc9c\x2d9484\x2d4f35\x2d9545\x2d3ed781ec9cd8.mount completed and consumed the indicated resources.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount has successfully entered the 'dead' state.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7bd0c57f\x2daab2\x2d4637\x2da379\x2dc6101e36b05e.mount completed and consumed the indicated resources.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount has successfully entered the 'dead' state.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-641ff746\x2dd7dc\x2d473d\x2d9386\x2dbe595a13e00b.mount completed and consumed the indicated resources.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount has successfully entered the 'dead' state.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2a2b2a19\x2d3ea1\x2d42b3\x2db1ae\x2d9c01f8a3a15b.mount completed and consumed the indicated resources.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount has successfully entered the 'dead' state.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c6f73427\x2d2e7b\x2d4ef5\x2db806\x2d407a9b56142d.mount completed and consumed the indicated resources.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5f4ef7ae32709f6cedff5bf424d3430c02b35bcde667cb50a718006c1520ce47-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8-userdata-shm.mount: Succeeded.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8ff0fe01959c091b256bce906a9825d71f2d22d3961ed03f0e2e3ada3a6ac6d8-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8-userdata-shm.mount: Succeeded.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-faf15c62ff9c384ef18c5294c998829284b6b9844054c41f52cf93ce5a3cbec8-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb-userdata-shm.mount: Succeeded.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fbd7111026dd3426eab0eb12c8b611a7a383189c6c401896548740d7f818cefb-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2ec9dc9c\x2d9484\x2d4f35\x2d9545\x2d3ed781ec9cd8.mount: Succeeded.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2ec9dc9c\x2d9484\x2d4f35\x2d9545\x2d3ed781ec9cd8.mount: Consumed 0 CPU time
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2ec9dc9c\x2d9484\x2d4f35\x2d9545\x2d3ed781ec9cd8.mount: Succeeded.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2ec9dc9c\x2d9484\x2d4f35\x2d9545\x2d3ed781ec9cd8.mount: Consumed 0 CPU time
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.077391551Z" level=info msg="runSandbox: deleting pod ID f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56 from idIndex" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.077417067Z" level=info msg="runSandbox: removing pod sandbox f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.077432537Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.077447468Z" level=info msg="runSandbox: unmounting shmPath for sandbox f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56-userdata-shm.mount: Succeeded.
Jan 23 16:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.085446191Z" level=info msg="runSandbox: removing pod sandbox from storage: f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.087982687Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.088002578Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=28aa0641-2129-4149-ab71-56f328650c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:37.088230 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:37.088409 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:37.088435 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:37.088480 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f81fb1a1b06e23316a7adeaffc1ef8be5213c25bfe86a5915f58d8a38aa0fc56): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.242670944Z" level=info msg="NetworkStart: stopping network for sandbox 232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.242801774Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/9faaf861-9770-4a41-b140-08ed5d323c87 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.242824840Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.242831591Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.242838709Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.242996329Z" level=info msg="NetworkStart: stopping network for sandbox 14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243110911Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/fa6470c7-7136-43ab-a834-a96fb1d48036 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243134318Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243141523Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243147423Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243223906Z" level=info msg="NetworkStart: stopping network for sandbox 5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243342374Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/77b7dd32-566c-4113-9576-3ef2d7c2bd97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243360813Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243367356Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.243373367Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.244268800Z" level=info msg="NetworkStart: stopping network for sandbox 11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.244390980Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/afab377d-89b8-41e7-86c8-ad36d2175fb1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.244415451Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.244423304Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.244430065Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.246825292Z" level=info msg="NetworkStart: stopping network for sandbox a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.246933208Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/8f64fa06-b1a9-430e-8c07-83f22809fc1e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.246954981Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.246962029Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:37.246969121Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.030896619Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.030931496Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:25:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f3f5732c\x2d40b3\x2d437d\x2d9c32\x2df1a52bbded40.mount: Succeeded.
Jan 23 16:25:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f3f5732c\x2d40b3\x2d437d\x2d9c32\x2df1a52bbded40.mount: Consumed 0 CPU time
Jan 23 16:25:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f3f5732c\x2d40b3\x2d437d\x2d9c32\x2df1a52bbded40.mount: Succeeded.
Jan 23 16:25:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f3f5732c\x2d40b3\x2d437d\x2d9c32\x2df1a52bbded40.mount: Consumed 0 CPU time
Jan 23 16:25:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f3f5732c\x2d40b3\x2d437d\x2d9c32\x2df1a52bbded40.mount: Succeeded.
Jan 23 16:25:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f3f5732c\x2d40b3\x2d437d\x2d9c32\x2df1a52bbded40.mount: Consumed 0 CPU time
Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.073307191Z" level=info msg="runSandbox: deleting pod ID 041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b from idIndex" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.073336245Z" level=info msg="runSandbox: removing pod sandbox 041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.073353257Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.073368118Z" level=info msg="runSandbox: unmounting shmPath for sandbox 041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:25:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b-userdata-shm.mount completed and consumed the indicated resources. 
Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.082446422Z" level=info msg="runSandbox: removing pod sandbox from storage: 041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.085679193Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:38.085698484Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4bc365b8-cf2e-461c-b019-966484e49394 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:38.085916 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:25:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:38.085966 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:25:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:38.085989 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:25:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:38.086038 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(041b318886f8c37ac9f94d227620bbf18a9353b111e170a5dea2b2e8ee51337b): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:25:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:41.996364 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:25:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:41.996760855Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:41.996816761Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:41.996835 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:25:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:41.997329 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:25:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:42.013337634Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/a7ea973b-e0b7-434b-bdee-74231156faad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:42.013364663Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:45 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 16:25:45.996499 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:45.996799044Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:45.996848557Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.007768306Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/07aa5045-d7e3-4474-8404-fdebbbf7f999 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.007791376Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:46.995846 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:25:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:46.995908 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:25:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:46.996113 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:25:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:46.996236 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.996432897Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.996497636Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.996528041Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.996552441Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.996432894Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.996588038Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.996602180Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:46.996509217Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:47.021330780Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/e80b3ca1-306f-452b-9418-6a8644191484 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:47.021354173Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:47.022582688Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/479cda90-d2ab-4ff1-a88a-edc56201bf13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:47.022601050Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network 
\"multus-cni-network\" (type=multus)" Jan 23 16:25:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:47.024358928Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/13843d49-e0f2-43b6-b031-cfba620cfa5e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:47.024376951Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:47.026427360Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/8ec13194-2b60-4f95-9689-a6beb71b66db Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:47.026450306Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:48.996363 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:48.996453 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:48.996773544Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:48.996979575Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:48.996857292Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:48.997089678Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:49.011832802Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/cfd222a9-891e-476b-9e92-603816a66781 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:49.011871308Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:49.016956340Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/f49a0aed-f892-4886-8b12-b5b956a18e3c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:49.016979464Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:49.995891 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:25:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:49.995976 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:49.996276589Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:49.996324211Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:49.996363863Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:49.996329240Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:50.011705634Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/226dacce-c098-4bf7-9a66-fd0c59b93d2f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:50.011728606Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:50.012189985Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/c8eda846-8586-4649-97e2-913b5d04bc64 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:50.012219339Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:51.995806 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:51.996226524Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:51.996269926Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:25:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:52.007550696Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/140cc397-4e0d-4095-83c5-b40e13dbc37b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:25:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:52.007571554Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:25:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:52.997062 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:25:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:25:52.997704 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:25:53 hub-master-0.workload.bos2.lab conmon[10200]: conmon 274b97d85b2bd8c34760 : container 10337 exited with status 1 Jan 23 16:25:53 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.scope has successfully entered the 'dead' state. Jan 23 16:25:53 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.scope: Consumed 57ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.scope completed and consumed the indicated resources. Jan 23 16:25:53 hub-master-0.workload.bos2.lab systemd[1]: crio-274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.scope has successfully entered the 'dead' state. 
Jan 23 16:25:53 hub-master-0.workload.bos2.lab systemd[1]: crio-274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da.scope: Consumed 3.760s CPU time
Jan 23 16:25:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:54.314243 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da" exitCode=1
Jan 23 16:25:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:54.314273 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da}
Jan 23 16:25:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:54.314528 8631 scope.go:115] "RemoveContainer" containerID="274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da"
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.315038913Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=76713387-9f28-4539-b318-7ad2253784bd name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.315225464Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=76713387-9f28-4539-b318-7ad2253784bd name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.315838817Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=50b85684-0a9b-4196-b9c9-5ff2b1ac6adb name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.315936713Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=50b85684-0a9b-4196-b9c9-5ff2b1ac6adb name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.316826298Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=99014032-8c05-4e55-bcb0-c31a16c46910 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.316907252Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:25:54 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope.
Jan 23 16:25:54 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.480249713Z" level=info msg="Created container 6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868: openshift-multus/multus-cdt6c/kube-multus" id=99014032-8c05-4e55-bcb0-c31a16c46910 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.480898121Z" level=info msg="Starting container: 6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868" id=56c25836-c469-4813-9992-0fa1ee0b3d3d name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.500949902Z" level=info msg="Started container" PID=33554 containerID=6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868 description=openshift-multus/multus-cdt6c/kube-multus id=56c25836-c469-4813-9992-0fa1ee0b3d3d name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.505473890Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_309dcd6b-53bd-49c2-8527-a8baec4dcd47\""
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.516137512Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.516160047Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.528674377Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\""
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.538243171Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.538265293Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:25:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:54.538278721Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_309dcd6b-53bd-49c2-8527-a8baec4dcd47\""
Jan 23 16:25:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:25:55.317743 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868}
Jan 23 16:25:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:25:58.144459434Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:26:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:04.996428 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333"
Jan 23 16:26:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:04.997069 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:26:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:16.996941 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333"
Jan 23 16:26:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:16.997448 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.024500170Z" level=info msg="NetworkStart: stopping network for sandbox a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.024880660Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/74724d4d-0995-42c2-8e4b-021ef9207ecf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.024904443Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.024911438Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.024918213Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.253862287Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.253899764Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.254259056Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.254286516Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.254273430Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.254354987Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.255371560Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.255415105Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.256818750Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.256851657Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-afab377d\x2d89b8\x2d41e7\x2d86c8\x2dad36d2175fb1.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-afab377d\x2d89b8\x2d41e7\x2d86c8\x2dad36d2175fb1.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fa6470c7\x2d7136\x2d43ab\x2da834\x2da96fb1d48036.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fa6470c7\x2d7136\x2d43ab\x2da834\x2da96fb1d48036.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9faaf861\x2d9770\x2d4a41\x2db140\x2d08ed5d323c87.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9faaf861\x2d9770\x2d4a41\x2db140\x2d08ed5d323c87.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-77b7dd32\x2d566c\x2d4113\x2d9576\x2d3ef2d7c2bd97.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-77b7dd32\x2d566c\x2d4113\x2d9576\x2d3ef2d7c2bd97.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8f64fa06\x2db1a9\x2d430e\x2d8c07\x2d83f22809fc1e.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8f64fa06\x2db1a9\x2d430e\x2d8c07\x2d83f22809fc1e.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-afab377d\x2d89b8\x2d41e7\x2d86c8\x2dad36d2175fb1.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-afab377d\x2d89b8\x2d41e7\x2d86c8\x2dad36d2175fb1.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9faaf861\x2d9770\x2d4a41\x2db140\x2d08ed5d323c87.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9faaf861\x2d9770\x2d4a41\x2db140\x2d08ed5d323c87.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-77b7dd32\x2d566c\x2d4113\x2d9576\x2d3ef2d7c2bd97.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-77b7dd32\x2d566c\x2d4113\x2d9576\x2d3ef2d7c2bd97.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8f64fa06\x2db1a9\x2d430e\x2d8c07\x2d83f22809fc1e.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8f64fa06\x2db1a9\x2d430e\x2d8c07\x2d83f22809fc1e.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fa6470c7\x2d7136\x2d43ab\x2da834\x2da96fb1d48036.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fa6470c7\x2d7136\x2d43ab\x2da834\x2da96fb1d48036.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-afab377d\x2d89b8\x2d41e7\x2d86c8\x2dad36d2175fb1.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-afab377d\x2d89b8\x2d41e7\x2d86c8\x2dad36d2175fb1.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9faaf861\x2d9770\x2d4a41\x2db140\x2d08ed5d323c87.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9faaf861\x2d9770\x2d4a41\x2db140\x2d08ed5d323c87.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-77b7dd32\x2d566c\x2d4113\x2d9576\x2d3ef2d7c2bd97.mount: Succeeded.
Jan 23 16:26:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-77b7dd32\x2d566c\x2d4113\x2d9576\x2d3ef2d7c2bd97.mount: Consumed 0 CPU time
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297374374Z" level=info msg="runSandbox: deleting pod ID 11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e from idIndex" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297410683Z" level=info msg="runSandbox: removing pod sandbox 11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297427824Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297444983Z" level=info msg="runSandbox: unmounting shmPath for sandbox 11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297375287Z" level=info msg="runSandbox: deleting pod ID 232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556 from idIndex" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297513185Z" level=info msg="runSandbox: removing pod sandbox 232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297532904Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297546383Z" level=info msg="runSandbox: unmounting shmPath for sandbox 232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297380271Z" level=info msg="runSandbox: deleting pod ID 5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c from idIndex" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297594483Z" level=info msg="runSandbox: removing pod sandbox 5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297608249Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.297624086Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.301319759Z" level=info msg="runSandbox: deleting pod ID a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900 from idIndex" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.301344506Z" level=info msg="runSandbox: removing pod sandbox a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.301359417Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.301371917Z" level=info msg="runSandbox: unmounting shmPath for sandbox a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.301346878Z" level=info msg="runSandbox: deleting pod ID 14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d from idIndex" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.301441166Z" level=info msg="runSandbox: removing pod sandbox 14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.301454261Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.301465643Z" level=info msg="runSandbox: unmounting shmPath for sandbox 14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.314447715Z" level=info msg="runSandbox: removing pod sandbox from storage: 5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.314503582Z" level=info msg="runSandbox: removing pod sandbox from storage: 11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.315430788Z" level=info msg="runSandbox: removing pod sandbox from storage: 232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.317855258Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.317874717Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=bf6b4dba-98b9-4cc2-8f13-ac2ceb574659 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.318426 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.318473 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.318494 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.318544 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.321054869Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.321072061Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=71afa18f-d4f7-4cc5-a114-1a9615787774 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.321437277Z" level=info msg="runSandbox: removing pod sandbox from storage: a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.321355 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.321505 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.321527 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.321566 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.322428795Z" level=info msg="runSandbox: removing pod sandbox from storage: 14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.324490629Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.324508735Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=0e8e1ce8-e4d8-48ec-b50a-e0f426232929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.324777 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.324834 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.324859 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.324906 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.327584813Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.327601064Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=d65563c1-168f-4a72-b0d5-c1d2cef45486 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.327841 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.327884 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.327905 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.327942 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.334234996Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.334256961Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=51e39ae4-59d5-4921-85fe-a177f90ec01a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.334385 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.334419 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.334440 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:22.334483 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:22.366586 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:22.366693 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:22.366898 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:22.366946 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.366982290Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367011921Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:22.367054 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367128267Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367158370Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367230369Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367256463Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367264671Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367279340Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367233965Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.367310891Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.392966350Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ddd57881-ec95-47d9-9813-efe5ce374adf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.392987977Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.393511141Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/3dc5da99-54f7-49ab-b7ba-35b927545579 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.393528849Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.394780244Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/029bc923-7fd3-4202-8611-15aab50c289a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.394799127Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.395799939Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/3c4a677d-e03f-45a0-8d35-f13c5508636a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.395820774Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.396510021Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/83aba413-e192-49d1-87ab-902df0ee4b1d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:26:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:22.396531563Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8f64fa06\x2db1a9\x2d430e\x2d8c07\x2d83f22809fc1e.mount: Succeeded.
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8f64fa06\x2db1a9\x2d430e\x2d8c07\x2d83f22809fc1e.mount: Consumed 0 CPU time
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fa6470c7\x2d7136\x2d43ab\x2da834\x2da96fb1d48036.mount: Succeeded.
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fa6470c7\x2d7136\x2d43ab\x2da834\x2da96fb1d48036.mount: Consumed 0 CPU time
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900-userdata-shm.mount: Succeeded.
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a1cf15f5c9013166042f5c99e3893ffecf645a9b5098cce435f0072b2d8ac900-userdata-shm.mount: Consumed 0 CPU time
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e-userdata-shm.mount: Succeeded.
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-11d0b45bbeada477bbad00dfb4ec2e31a048e623c70db04faab556ad97a3e43e-userdata-shm.mount completed and consumed the indicated resources. Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-14c87db26d0c9d081acecd497143f406c380e2dc9e6a06ff90b67304bb5eba2d-userdata-shm.mount completed and consumed the indicated resources. Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-232378a4b0559bc02f37a74321fbfda160b9ad3dd30eed985181b37b07c20556-userdata-shm.mount completed and consumed the indicated resources. Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c-userdata-shm.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5928651741adf20b686181585d05b7cd3f45fbc5022c552190a0f68cf694627c-userdata-shm.mount completed and consumed the indicated resources. 
Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[25135]: Starting Cleanup of User's Temporary Files and Directories... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 16:26:23 hub-master-0.workload.bos2.lab systemd[25135]: Started Cleanup of User's Temporary Files and Directories. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 16:26:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:27.027475346Z" level=info msg="NetworkStart: stopping network for sandbox e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:27.027793389Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/a7ea973b-e0b7-434b-bdee-74231156faad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:27.027816784Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:27.027823067Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:27.027829479Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:27.860080 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:27.860102 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:27.860110 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:27.860116 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:27.860128 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:27.860136 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:27.860143 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:26:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:28.142650620Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:26:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:30.996445 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:26:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:30.997104 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:26:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:31.021828505Z" level=info msg="NetworkStart: stopping network for sandbox 1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:31.022042196Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/07aa5045-d7e3-4474-8404-fdebbbf7f999 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:31.022067867Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:31.022076005Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:31.022083890Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.035594811Z" level=info msg="NetworkStart: stopping network for sandbox ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.035757406Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/479cda90-d2ab-4ff1-a88a-edc56201bf13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.035785638Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.035793926Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:32 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.035804033Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.035938239Z" level=info msg="NetworkStart: stopping network for sandbox f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.035995544Z" level=info msg="NetworkStart: stopping network for sandbox 3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.036063188Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/13843d49-e0f2-43b6-b031-cfba620cfa5e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.036091090Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.036099375Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.036105890Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.036107146Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/e80b3ca1-306f-452b-9418-6a8644191484 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.036215332Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.036224540Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.036232113Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.038844308Z" level=info msg="NetworkStart: stopping network for sandbox 488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.038977744Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 
Namespace:openshift-network-diagnostics ID:488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/8ec13194-2b60-4f95-9689-a6beb71b66db Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.039000519Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.039007240Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:32.039014130Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.026678028Z" level=info msg="NetworkStart: stopping network for sandbox 31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.026815042Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/cfd222a9-891e-476b-9e92-603816a66781 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.026837821Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.026845337Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.026852026Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.030070762Z" level=info msg="NetworkStart: stopping network for sandbox 79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.030221909Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/f49a0aed-f892-4886-8b12-b5b956a18e3c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.030246428Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.030253755Z" level=warning msg="falling back to loading from existing 
plugins on disk" Jan 23 16:26:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:34.030260488Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025302086Z" level=info msg="NetworkStart: stopping network for sandbox 151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025468940Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/c8eda846-8586-4649-97e2-913b5d04bc64 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025494929Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025503525Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025510523Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025684129Z" level=info msg="NetworkStart: stopping network for sandbox d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025816300Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/226dacce-c098-4bf7-9a66-fd0c59b93d2f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025841466Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025848068Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:35.025854308Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:37.023540930Z" level=info msg="NetworkStart: stopping network for sandbox 20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:37.023680290Z" level=info msg="Got pod network 
&{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/140cc397-4e0d-4095-83c5-b40e13dbc37b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:37.023701792Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:37.023708777Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:37.023715398Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1213] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1218] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1219] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1412] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1413] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1425] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1428] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1428] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1430] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1433] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491198.1435] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:26:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491199.5376] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:26:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:43.997091 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:26:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:43.997650 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node 
pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:26:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:55.996901 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:26:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:55.997742821Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=7e726d57-aa15-4a75-bc33-0df66c7a7626 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:26:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:55.998060410Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7e726d57-aa15-4a75-bc33-0df66c7a7626 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:26:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:55.998585856Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=e3f9876b-00ec-4b9b-884f-fc65c049e20d name=/runtime.v1.ImageService/ImageStatus Jan 23 16:26:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:55.998689357Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e3f9876b-00ec-4b9b-884f-fc65c049e20d name=/runtime.v1.ImageService/ImageStatus Jan 23 16:26:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:55.999586420Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=51249b64-e66b-4fd9-b55c-4beb9a386876 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:26:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:55.999657585Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:26:56 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope. -- Subject: Unit crio-conmon-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:26:56 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6. 
-- Subject: Unit crio-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.120929917Z" level=info msg="Created container 8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=51249b64-e66b-4fd9-b55c-4beb9a386876 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.121486982Z" level=info msg="Starting container: 8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" id=548b3abe-16f5-4b76-b4b8-31be175186ba name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.128754978Z" level=info msg="Started container" PID=35453 containerID=8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=548b3abe-16f5-4b76-b4b8-31be175186ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.133212039Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.144156634Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.144177137Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.144189807Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.152837913Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.152857607Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.152868853Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.161344280Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.161362999Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.161371837Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.169300908Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:26:56.169318828Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.169330354Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.176857427Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:26:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:56.176882863Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:26:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:56.431408 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/180.log" Jan 23 16:26:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:56.432260 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6} Jan 23 16:26:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:56.432643 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:26:56 hub-master-0.workload.bos2.lab conmon[35441]: conmon 8005f7d165268f47b1ba : container 35453 exited with status 1 Jan 23 16:26:56 hub-master-0.workload.bos2.lab systemd[1]: crio-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope has successfully entered the 'dead' state. Jan 23 16:26:56 hub-master-0.workload.bos2.lab systemd[1]: crio-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope: Consumed 575ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope completed and consumed the indicated resources. Jan 23 16:26:56 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope has successfully entered the 'dead' state. Jan 23 16:26:56 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope: Consumed 51ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6.scope completed and consumed the indicated resources. 
Jan 23 16:26:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:57.436028 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/181.log" Jan 23 16:26:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:57.436606 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/180.log" Jan 23 16:26:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:57.437831 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" exitCode=1 Jan 23 16:26:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:57.437859 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6} Jan 23 16:26:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:57.437881 8631 scope.go:115] "RemoveContainer" containerID="c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" Jan 23 16:26:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:57.438730 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:26:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:57.438889877Z" level=info msg="Removing container: c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333" id=2e67abd4-2db5-49e9-90a2-13843f88d701 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:26:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:57.439278 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:26:57 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-68061f11cf5025287c21de1ff2e0acb5f7a1ad81f89b76bb59178a39f6cd21c5-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-68061f11cf5025287c21de1ff2e0acb5f7a1ad81f89b76bb59178a39f6cd21c5-merged.mount has successfully entered the 'dead' state. Jan 23 16:26:57 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-68061f11cf5025287c21de1ff2e0acb5f7a1ad81f89b76bb59178a39f6cd21c5-merged.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-68061f11cf5025287c21de1ff2e0acb5f7a1ad81f89b76bb59178a39f6cd21c5-merged.mount completed and consumed the indicated resources. 
Jan 23 16:26:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:57.476328512Z" level=info msg="Removed container c865eeedc39931f729117e3ad39dd05ae08d6d4486964852163d894f795e9333: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=2e67abd4-2db5-49e9-90a2-13843f88d701 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:26:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:26:58.142984011Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:26:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:58.440758 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/181.log" Jan 23 16:26:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:26:58.442877 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:26:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:26:58.443434 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.036651058Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.036899506Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-74724d4d\x2d0995\x2d42c2\x2d8e4b\x2d021ef9207ecf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-74724d4d\x2d0995\x2d42c2\x2d8e4b\x2d021ef9207ecf.mount has successfully entered the 'dead' state. Jan 23 16:27:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-74724d4d\x2d0995\x2d42c2\x2d8e4b\x2d021ef9207ecf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-74724d4d\x2d0995\x2d42c2\x2d8e4b\x2d021ef9207ecf.mount has successfully entered the 'dead' state. Jan 23 16:27:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-74724d4d\x2d0995\x2d42c2\x2d8e4b\x2d021ef9207ecf.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-74724d4d\x2d0995\x2d42c2\x2d8e4b\x2d021ef9207ecf.mount has successfully entered the 'dead' state. Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.068360637Z" level=info msg="runSandbox: deleting pod ID a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b from idIndex" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.068392764Z" level=info msg="runSandbox: removing pod sandbox a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.068409608Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.068424475Z" level=info msg="runSandbox: unmounting shmPath for sandbox a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.080423581Z" level=info msg="runSandbox: removing pod sandbox from storage: a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.083825274Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.083844600Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=c982545d-eb3a-4944-a084-7cdbab1ad227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:07.084073 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:27:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:07.084123 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:27:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:07.084149 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:27:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:07.084199 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a68808451e18b73276e17d5ae566c3cf006b038b054afc6e1f97017157116f2b): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406108055Z" level=info msg="NetworkStart: stopping network for sandbox 0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406252156Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/3dc5da99-54f7-49ab-b7ba-35b927545579 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406276005Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406283644Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406290450Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406636295Z" level=info msg="NetworkStart: stopping network for sandbox a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406745012Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ddd57881-ec95-47d9-9813-efe5ce374adf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406764471Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406770949Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.406777567Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.407795246Z" level=info msg="NetworkStart: stopping network for sandbox ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.407924670Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager 
ID:ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/029bc923-7fd3-4202-8611-15aab50c289a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.407945638Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.407952069Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.407958453Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.409493217Z" level=info msg="NetworkStart: stopping network for sandbox cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.409602062Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/3c4a677d-e03f-45a0-8d35-f13c5508636a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.409621520Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.409627991Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.409635015Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.409845059Z" level=info msg="NetworkStart: stopping network for sandbox fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.409968154Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/83aba413-e192-49d1-87ab-902df0ee4b1d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.409993234Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:27:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.410001592Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:27:07 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:07.410008853Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.041151815Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.041199461Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a7ea973b\x2de0b7\x2d434b\x2dbdee\x2d74231156faad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a7ea973b\x2de0b7\x2d434b\x2dbdee\x2d74231156faad.mount has successfully entered the 'dead' state. Jan 23 16:27:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a7ea973b\x2de0b7\x2d434b\x2dbdee\x2d74231156faad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a7ea973b\x2de0b7\x2d434b\x2dbdee\x2d74231156faad.mount has successfully entered the 'dead' state. Jan 23 16:27:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a7ea973b\x2de0b7\x2d434b\x2dbdee\x2d74231156faad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a7ea973b\x2de0b7\x2d434b\x2dbdee\x2d74231156faad.mount has successfully entered the 'dead' state. 
Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.094328275Z" level=info msg="runSandbox: deleting pod ID e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c from idIndex" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.094353229Z" level=info msg="runSandbox: removing pod sandbox e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.094366745Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.094384166Z" level=info msg="runSandbox: unmounting shmPath for sandbox e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c-userdata-shm.mount: Succeeded. Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.107453555Z" level=info msg="runSandbox: removing pod sandbox from storage: e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.110716470Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:12.110735649Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=52d2bf0b-2143-44a6-a621-dc26c339af06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:12.110968 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready?
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:27:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:12.111031 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:27:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:12.111056 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:27:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:12.111109 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e107d32c336b3e962f7d81e89ea77166ac77d3e4327396d17b3c4a493149a79c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:27:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:13.996933 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:27:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:13.997478 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.033911101Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.033956607Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-07aa5045\x2dd7e3\x2d4474\x2d8404\x2dfdebbbf7f999.mount: Succeeded. Jan 23 16:27:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-07aa5045\x2dd7e3\x2d4474\x2d8404\x2dfdebbbf7f999.mount: Succeeded. Jan 23 16:27:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-07aa5045\x2dd7e3\x2d4474\x2d8404\x2dfdebbbf7f999.mount: Succeeded.
Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.073304831Z" level=info msg="runSandbox: deleting pod ID 1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137 from idIndex" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.073333880Z" level=info msg="runSandbox: removing pod sandbox 1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.073348903Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.073373774Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137-userdata-shm.mount: Succeeded. Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.085421924Z" level=info msg="runSandbox: removing pod sandbox from storage: 1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.088957419Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:16.088976600Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=6d9e51ac-f448-4bff-88f9-4b3c65780b86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:16.089180 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 16:27:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:16.089234 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:27:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:16.089259 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:27:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:16.089313 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(1e12b287ecf52a26a6bcbf883683604c0fc41a2e2747c181819338edfd3d2137): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.047366240Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.047409861Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.047657837Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.047690165Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.047782280Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.047813521Z" level=info msg="runSandbox: 
cleaning up namespaces after failing to run sandbox 3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.050975705Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.051008663Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-13843d49\x2de0f2\x2d43b6\x2db031\x2dcfba620cfa5e.mount: Succeeded. Jan 23 16:27:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-479cda90\x2dd2ab\x2d4ff1\x2da88a\x2dedc56201bf13.mount: Succeeded. Jan 23 16:27:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e80b3ca1\x2d306f\x2d452b\x2d9418\x2d6a8644191484.mount: Succeeded. Jan 23 16:27:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8ec13194\x2d2b60\x2d4f95\x2d9689\x2da6beb71b66db.mount: Succeeded. Jan 23 16:27:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-479cda90\x2dd2ab\x2d4ff1\x2da88a\x2dedc56201bf13.mount: Succeeded. Jan 23 16:27:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-13843d49\x2de0f2\x2d43b6\x2db031\x2dcfba620cfa5e.mount: Succeeded.
Jan 23 16:27:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8ec13194\x2d2b60\x2d4f95\x2d9689\x2da6beb71b66db.mount: Succeeded. Jan 23 16:27:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e80b3ca1\x2d306f\x2d452b\x2d9418\x2d6a8644191484.mount: Succeeded. Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.093397583Z" level=info msg="runSandbox: deleting pod ID ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c from idIndex" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.093429060Z" level=info msg="runSandbox: removing pod sandbox ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.093446221Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.093465944Z" level=info msg="runSandbox: unmounting shmPath for sandbox ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.095300181Z" level=info msg="runSandbox: deleting pod ID 3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d from idIndex" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.095328456Z" level=info msg="runSandbox: removing pod sandbox 3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.095341360Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.095354389Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.095378503Z" level=info msg="runSandbox: deleting pod ID f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e from idIndex" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]:
time="2023-01-23 16:27:17.095407454Z" level=info msg="runSandbox: removing pod sandbox f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.095421763Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.095439880Z" level=info msg="runSandbox: unmounting shmPath for sandbox f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.096350528Z" level=info msg="runSandbox: deleting pod ID 488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78 from idIndex" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.096377693Z" level=info msg="runSandbox: removing pod sandbox 488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.096391873Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.096404792Z" level=info msg="runSandbox: unmounting shmPath for sandbox 488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.100393678Z" level=info msg="runSandbox: removing pod sandbox from storage: ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.103412132Z" level=info msg="runSandbox: removing pod sandbox from storage: f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.103827114Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.103847911Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=c7665ae5-08ff-4a28-add2-18519b231e0a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: E0123 16:27:17.104138 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.104190 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.104231 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.104292 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.106953350Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.106971605Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=9e39a523-83af-483b-823b-a1abe21d0ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.107178 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.107223 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.107249 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.107291 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.107418007Z" level=info msg="runSandbox: removing pod sandbox from storage: 3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.107465637Z" level=info msg="runSandbox: removing pod sandbox from storage: 488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.110655440Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.110673525Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=f2b98f7d-0e51-4d58-b687-553af83ccbf4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.110897 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.110935 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.110960 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.111019 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.114013303Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:17.114036718Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=baebe421-5352-40b7-bdb0-d861e92e1603 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.114289 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.114322 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.114345 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:27:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:17.114384 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:27:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8ec13194\x2d2b60\x2d4f95\x2d9689\x2da6beb71b66db.mount: Succeeded. Jan 23 16:27:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-13843d49\x2de0f2\x2d43b6\x2db031\x2dcfba620cfa5e.mount: Succeeded. Jan 23 16:27:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-479cda90\x2dd2ab\x2d4ff1\x2da88a\x2dedc56201bf13.mount: Succeeded. Jan 23 16:27:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e80b3ca1\x2d306f\x2d452b\x2d9418\x2d6a8644191484.mount: Succeeded. Jan 23 16:27:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ee79dedf96eaf820a5517a82ed435d8ced6fa2202d0069dab1744020913a1b9c-userdata-shm.mount: Succeeded. Jan 23 16:27:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f049225445da4256395fbbba3c898d92579d6d8ac90367da6d1685cc3c05a65e-userdata-shm.mount: Succeeded. Jan 23 16:27:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-488f8b76992035b96239cd2f0c6f5cb68969c1b9c78e28ce991a8efe15a1ba78-userdata-shm.mount: Succeeded. Jan 23 16:27:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3fbc2bdf681a382a826d414c59a98d8adfa8c663b6e4cf230fa73247b61a623d-userdata-shm.mount: Succeeded. Jan 23 16:27:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:18.996082 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:27:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:18.996397901Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:18.996435030Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.008799431Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/bcb4aa51-2b76-4856-9c74-bf6e10add049 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.008819533Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.038386436Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on
del): timed out waiting for the condition" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.038417379Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.041017539Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.041052139Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cfd222a9\x2d891e\x2d476b\x2d9e92\x2d603816a66781.mount: Succeeded. Jan 23 16:27:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f49a0aed\x2df892\x2d4886\x2d8b12\x2db5b956a18e3c.mount: Succeeded. Jan 23 16:27:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f49a0aed\x2df892\x2d4886\x2d8b12\x2db5b956a18e3c.mount: Succeeded. Jan 23 16:27:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cfd222a9\x2d891e\x2d476b\x2d9e92\x2d603816a66781.mount: Succeeded. Jan 23 16:27:19 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f49a0aed\x2df892\x2d4886\x2d8b12\x2db5b956a18e3c.mount: Succeeded.
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.075281601Z" level=info msg="runSandbox: deleting pod ID 79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0 from idIndex" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.075307028Z" level=info msg="runSandbox: removing pod sandbox 79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.075323315Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.075337666Z" level=info msg="runSandbox: unmounting shmPath for sandbox 79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cfd222a9\x2d891e\x2d476b\x2d9e92\x2d603816a66781.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-cfd222a9\x2d891e\x2d476b\x2d9e92\x2d603816a66781.mount has successfully entered the 'dead' state.
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.079282651Z" level=info msg="runSandbox: deleting pod ID 31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824 from idIndex" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.079308360Z" level=info msg="runSandbox: removing pod sandbox 31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.079321952Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.079334606Z" level=info msg="runSandbox: unmounting shmPath for sandbox 31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.088455100Z" level=info msg="runSandbox: removing pod sandbox from storage: 79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.091044303Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.091063484Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=6bfac3b4-14a6-4515-8a7d-5355d4c90510 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:19.091271 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:19.091309 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:27:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:19.091332 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:27:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:19.091378 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(79b7ab61560969aab20af0cbf82035b557c639d8a03b1c6b5e2f5c28c6b85fe0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.092453897Z" level=info msg="runSandbox: removing pod sandbox from storage: 31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.095759157Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:19.095779420Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=e6d77995-1880-4d88-b8a5-c757cd6a5d25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:19.095907 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:19.095939 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:27:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:19.095959 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:27:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:19.096006 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.036716252Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.036748748Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.038001999Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.038030285Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c8eda846\x2d8586\x2d4649\x2d97e2\x2d913b5d04bc64.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c8eda846\x2d8586\x2d4649\x2d97e2\x2d913b5d04bc64.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c8eda846\x2d8586\x2d4649\x2d97e2\x2d913b5d04bc64.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c8eda846\x2d8586\x2d4649\x2d97e2\x2d913b5d04bc64.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-226dacce\x2dc098\x2d4bf7\x2d9a66\x2dfd0c59b93d2f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-226dacce\x2dc098\x2d4bf7\x2d9a66\x2dfd0c59b93d2f.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-226dacce\x2dc098\x2d4bf7\x2d9a66\x2dfd0c59b93d2f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-226dacce\x2dc098\x2d4bf7\x2d9a66\x2dfd0c59b93d2f.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-31998d00867809b2df8fec87d31129b0ab546326aefa51ca8f72bbbd51520824-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c8eda846\x2d8586\x2d4649\x2d97e2\x2d913b5d04bc64.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c8eda846\x2d8586\x2d4649\x2d97e2\x2d913b5d04bc64.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-226dacce\x2dc098\x2d4bf7\x2d9a66\x2dfd0c59b93d2f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-226dacce\x2dc098\x2d4bf7\x2d9a66\x2dfd0c59b93d2f.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.075307447Z" level=info msg="runSandbox: deleting pod ID 151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5 from idIndex" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.075332953Z" level=info msg="runSandbox: removing pod sandbox 151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.075347310Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.075361574Z" level=info msg="runSandbox: unmounting shmPath for sandbox 151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.077278836Z" level=info msg="runSandbox: deleting pod ID d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd from idIndex" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.077302210Z" level=info msg="runSandbox: removing pod sandbox d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.077315251Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.077326374Z" level=info msg="runSandbox: unmounting shmPath for sandbox d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.088433188Z" level=info msg="runSandbox: removing pod sandbox from storage: 151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.089417676Z" level=info msg="runSandbox: removing pod sandbox from storage: d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.091779643Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.091799325Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=5aea3a9c-94df-400e-a009-33a102c0050d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:20.092067 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:20.092114 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:20.092139 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:20.092191 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(151ccb525baaadd4ec8dcd9b2dbbde79bfc6afceabc7b206cb1786f17237f5b5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.094821485Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:20.094838929Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=d84c1b55-45a3-466d-8a5a-af119ec04fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:20.095084 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:20.095121 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:20.095145 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:20.095191 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(d5e943c9ea328ebdf49a53b78425eceb54d4cacd036b99a31c24d4a0fd6dd8cd): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.035564457Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.035802751Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-140cc397\x2d4e0d\x2d4095\x2d83c5\x2db40e13dbc37b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-140cc397\x2d4e0d\x2d4095\x2d83c5\x2db40e13dbc37b.mount has successfully entered the 'dead' state.
Jan 23 16:27:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-140cc397\x2d4e0d\x2d4095\x2d83c5\x2db40e13dbc37b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-140cc397\x2d4e0d\x2d4095\x2d83c5\x2db40e13dbc37b.mount has successfully entered the 'dead' state.
Jan 23 16:27:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-140cc397\x2d4e0d\x2d4095\x2d83c5\x2db40e13dbc37b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-140cc397\x2d4e0d\x2d4095\x2d83c5\x2db40e13dbc37b.mount has successfully entered the 'dead' state.
Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.081305968Z" level=info msg="runSandbox: deleting pod ID 20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78 from idIndex" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.081329161Z" level=info msg="runSandbox: removing pod sandbox 20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.081343446Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.081359304Z" level=info msg="runSandbox: unmounting shmPath for sandbox 20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.089424504Z" level=info msg="runSandbox: removing pod sandbox from storage: 20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.092684760Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:22.092702878Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=30ff90a9-a96c-4fd2-aaef-853fb0d6216d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:22.092921 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:27:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:22.092971 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:27:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:22.093000 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:27:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:22.093047 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20b38671382c49c8286957ec04242eeb612ac8e9166f78bcc54007ae6c8d3a78): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:27:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:25.995521 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:27:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:25.995838597Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:25.995876759Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:26.013044602Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/31f102a2-da27-4776-a782-d5260c122e77 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:26.013069606Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:27.860536 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:27.860558 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:27.860564 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:27.860572 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:27.860579 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:27.860585 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:27.860591 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.142418154Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:27:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:28.996432 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:27:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:28.996652 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:27:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:28.996532 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:27:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:28.996737 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:27:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:28.996911 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.996906660Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.996955540Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.997028167Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.997075956Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.997034999Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.997161125Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.997034894Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:28.997245890Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:27:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:28.997399 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.018791927Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/00e930b0-149e-44ba-8326-10a02c63400a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: 
MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.018815532Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.021321129Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/9161066e-672f-4bb0-a660-a3ecfc45b6cd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.021340675Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.022115454Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/1552ac6e-0bdb-4202-8594-e3cc95df1fce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.022136351Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.023830055Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/fd62b91c-a764-4c09-9842-6e863233a147 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.023848984Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:27:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:29.996513 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.996894328Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:29.996933122Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:30.008182719Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/1264c6fc-3c07-4d91-ac63-883403db268a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:30.008213304Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:30.995638    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:30.995943718Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:30.995980924Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:31.011164397Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/3bbecc63-904d-449d-905f-64b29354d9c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:31.011192730Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:31.996352    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:27:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:31.996693793Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:31.996948416Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:32.009042630Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/feb11661-6b7e-4e8b-b418-dfc242d28c4d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:32.009067899Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:34.996116    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:27:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:34.996265    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:27:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:34.996471443Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:34.996509967Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:34.996620791Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:34.996672328Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:35.010744300Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/7939d2ef-bbc1-41e4-9611-2607cab17506 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:35.010768910Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:35.011317873Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/2eea8b81-6c6d-4b7e-93c6-4aac3de29474 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:35.011341071Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:35.996198    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:35.996599484Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:35.996650587Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:36.007368666Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/afb29534-4ffc-4bc4-b964-638acbd9f7e4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:36.007390724Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:41.997068    8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6"
Jan 23 16:27:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:41.997627    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
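The two kubenswrapper entries at 16:27:41 above show kubelet refusing to restart the crash-looping ovnkube-node container until its backoff expires. Kubelet's restart backoff doubles after each failed restart up to a cap, which is the "back-off 5m0s" in the message. A minimal Python sketch of that policy (illustrative only, not kubelet's code; the 10 s initial delay and 5 m cap match kubelet's documented defaults):

    # Illustrative sketch of kubelet-style restart backoff (not kubelet's
    # actual implementation): the delay doubles after each failed restart
    # and is capped, matching the repeated "back-off 5m0s" entries above.
    INITIAL_DELAY_S = 10   # kubelet's default initial backoff
    MAX_DELAY_S = 300      # the 5m0s cap seen in the log

    def restart_delays(failures: int):
        """Yield the backoff delay (seconds) before each restart attempt."""
        delay = INITIAL_DELAY_S
        for _ in range(failures):
            yield delay
            delay = min(delay * 2, MAX_DELAY_S)

    # After six or so consecutive failures the delay saturates at the cap,
    # which is why the log keeps repeating "back-off 5m0s":
    print(list(restart_delays(8)))  # [10, 20, 40, 80, 160, 300, 300, 300]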
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.417626659Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.417670435Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.417771973Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.417800326Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.419053900Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.419089427Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.420469471Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.420497699Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.420719828Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.420751616Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3dc5da99\x2d54f7\x2d49ab\x2db7ba\x2d35b927545579.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ddd57881\x2dec95\x2d47d9\x2d9813\x2defe5ce374adf.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-83aba413\x2de192\x2d49d1\x2d87ab\x2d902df0ee4b1d.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3c4a677d\x2de03f\x2d45a0\x2d8d35\x2df13c5508636a.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-029bc923\x2d7fd3\x2d4202\x2d8611\x2d15aab50c289a.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-83aba413\x2de192\x2d49d1\x2d87ab\x2d902df0ee4b1d.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ddd57881\x2dec95\x2d47d9\x2d9813\x2defe5ce374adf.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3c4a677d\x2de03f\x2d45a0\x2d8d35\x2df13c5508636a.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-029bc923\x2d7fd3\x2d4202\x2d8611\x2d15aab50c289a.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3dc5da99\x2d54f7\x2d49ab\x2db7ba\x2d35b927545579.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-83aba413\x2de192\x2d49d1\x2d87ab\x2d902df0ee4b1d.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3c4a677d\x2de03f\x2d45a0\x2d8d35\x2df13c5508636a.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3dc5da99\x2d54f7\x2d49ab\x2db7ba\x2d35b927545579.mount: Succeeded.
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.463361329Z" level=info msg="runSandbox: deleting pod ID cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34 from idIndex" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.463361535Z" level=info msg="runSandbox: deleting pod ID fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad from idIndex" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.463404861Z" level=info msg="runSandbox: removing pod sandbox cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.463428742Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.463441614Z" level=info msg="runSandbox: unmounting shmPath for sandbox cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.463470809Z" level=info msg="runSandbox: removing pod sandbox fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.463483939Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.463496249Z" level=info msg="runSandbox: unmounting shmPath for sandbox fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.464302993Z" level=info msg="runSandbox: deleting pod ID 0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3 from idIndex" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.464330769Z" level=info msg="runSandbox: removing pod sandbox 0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.464342785Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.464354281Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.464303107Z" level=info msg="runSandbox: deleting pod ID ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6 from idIndex" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.464410136Z" level=info msg="runSandbox: removing pod sandbox ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.464422291Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.464432862Z" level=info msg="runSandbox: unmounting shmPath for sandbox ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.472304706Z" level=info msg="runSandbox: deleting pod ID a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3 from idIndex" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.472326576Z" level=info msg="runSandbox: removing pod sandbox a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.472337827Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.472347189Z" level=info msg="runSandbox: unmounting shmPath for sandbox a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.479418107Z" level=info msg="runSandbox: removing pod sandbox from storage: fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.479428449Z" level=info msg="runSandbox: removing pod sandbox from storage: ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.480425623Z" level=info msg="runSandbox: removing pod sandbox from storage: cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
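The runSandbox entries above and below record CRI-O's cleanup path after a failed RunPodSandbox, and the order is the same for every sandbox: delete the pod ID from the idIndex, remove the sandbox, delete its container ID from the idIndex, unmount its shmPath, and finally remove it from storage. A Python sketch of that fixed sequence (the function and step strings are stand-ins written for this log, not CRI-O's actual Go code):

    def cleanup_failed_sandbox(sandbox_id: str, request_id: str) -> None:
        # Step order mirrors the runSandbox messages in the surrounding log;
        # each step is logged before it runs, so a partial cleanup still
        # leaves a readable trail in the journal.
        steps = [
            "deleting pod ID {sid} from idIndex",
            "removing pod sandbox {sid}",
            "deleting container ID from idIndex for sandbox {sid}",
            "unmounting shmPath for sandbox {sid}",
            "removing pod sandbox from storage: {sid}",
        ]
        for step in steps:
            print(f'level=info msg="runSandbox: {step.format(sid=sandbox_id)}" id={request_id}')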
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.480455515Z" level=info msg="runSandbox: removing pod sandbox from storage: 0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.482296246Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.482318710Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=88a367e1-2548-4b18-b042-c131c0d2bd28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.482592    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.482774    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.482801    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.482846    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
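All of the CreatePodSandbox failures in this stretch bottom out in the same condition: Multus polls for the OVN-Kubernetes readiness indicator file (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf, written once the default network is up) and gives up when it never appears, because ovnkube-node itself is crash-looping. A minimal Python analog of that check-immediately-then-poll wait (illustrative; Multus is written in Go and uses k8s.io/apimachinery's wait helpers, and the interval and timeout values here are assumptions, not Multus's exact defaults):

    import os
    import time

    READINESS_FILE = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

    def wait_for_readiness_file(path: str, interval: float = 1.0,
                                timeout: float = 60.0) -> None:
        """Check once immediately, then poll until the file exists or time runs out."""
        deadline = time.monotonic() + timeout
        while True:
            if os.path.exists(path):   # default network is ready
                return
            if time.monotonic() >= deadline:
                # The state this log shows: the file never appears because
                # ovnkube-node keeps crash-looping, so every add/del fails.
                raise TimeoutError("timed out waiting for the condition")
            time.sleep(interval)

    wait_for_readiness_file(READINESS_FILE)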
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.485401869Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.485419215Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=f93776fa-5594-4855-8b18-cc0ec8878d81 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.485650    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.485695    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.485720    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.485766    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.487495444Z" level=info msg="runSandbox: removing pod sandbox from storage: a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.488514459Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.488531513Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=d3893345-343f-4c03-83c5-cd13720e50eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.488794    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.488828    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.488849    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.488887    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.491590357Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.491607098Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=9cdfc564-5b27-4ee6-b786-d0a0249086c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.491848    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.491880    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.491901    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.491947    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.494661695Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.494680147Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=bca2cf59-b835-47e8-b3d8-5acc9843e72c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.494841    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.494873    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.494895    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:52.494936    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:52.547547    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:52.547634    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:52.547709    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:52.547769    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.547905541Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.547941942Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:52.547938    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.548064532Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.548107939Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.548179956Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.548212535Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.548219163Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.548243856Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.548106809Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.548379927Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.576395953Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/c9df0ea4-a262-4480-9607-17c0fb957ddb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.576428371Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.578871014Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/5c0b06d5-983d-42bc-a0b2-22462a24625d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.578897864Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.580139821Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/7933bf25-8606-49f5-a722-aa984a99a2c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.580162041Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.582534323Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/53dd6174-093f-49b5-b2e8-cd1222f45041 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.582555470Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.584148176Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/357298c1-c9dd-421e-98bf-973fd7cb032f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:27:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:52.584170722Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:27:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-029bc923\x2d7fd3\x2d4202\x2d8611\x2d15aab50c289a.mount: Succeeded.
Jan 23 16:27:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ddd57881\x2dec95\x2d47d9\x2d9813\x2defe5ce374adf.mount: Succeeded.
Jan 23 16:27:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fb0036f1a1ba68e3a1ba4de20d1e41683bda060ad764bf7c1849b598e5a43fad-userdata-shm.mount: Succeeded.
Jan 23 16:27:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cd10ad8f3613c3bf7797bb601a013f829ec09e613d3ef61bce040ae0f0aedd34-userdata-shm.mount: Succeeded.
Jan 23 16:27:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ada7db0f3c089eb059343c664177582d906f7338d4d9ace49a62feee82bdd9c6-userdata-shm.mount: Succeeded.
Jan 23 16:27:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0e947618fb919a59d7cfc416434bd60265cd02925e9702683efd5f8f658a39b3-userdata-shm.mount: Succeeded.
Jan 23 16:27:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a57b712743dbf0300a128c57a2a20e7f96701163d5af679fc3831b3b9d7495d3-userdata-shm.mount: Succeeded.
Jan 23 16:27:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:27:54.996716    8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6"
Jan 23 16:27:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:27:54.997221    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:27:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:27:58.143826503Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:28:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:04.023754936Z" level=info msg="NetworkStart: stopping network for sandbox 93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:04.023897519Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/bcb4aa51-2b76-4856-9c74-bf6e10add049 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:28:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:04.023922288Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:28:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:04.023929078Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:28:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:04.023935584Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:28:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491288.1194] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 16:28:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491288.1199] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 16:28:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491288.1200] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 16:28:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491288.1413] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 16:28:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491288.1414] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 16:28:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:09.997078    8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6"
Jan 23 16:28:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:09.997765    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
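The 16:28:04 NetworkStart sequence above, and the ones that follow, repeat one more pattern worth noting: when tearing down a sandbox's network, the CNI layer first looks for that sandbox's cached network config and, on a miss ("not found in CNI cache"), falls back to the plugin configuration currently on disk so the delete can still be routed to the right plugin. A Python sketch of that lookup-with-fallback (paths and file layout are assumptions for illustration; the real cache is libcni's, under /var/lib/cni by default):

    import json
    import os

    CACHE_DIR = "/var/lib/cni/results"   # assumed per-sandbox cache location
    CONF_DIR = "/etc/cni/net.d"          # plugin configs on disk

    def load_network_config(network: str, sandbox_id: str) -> dict:
        cached = os.path.join(CACHE_DIR, f"{network}-{sandbox_id}")
        try:
            with open(cached) as f:      # fast path: the cached result
                return json.load(f)
        except FileNotFoundError:
            # cache miss: "falling back to loading from existing plugins on disk"
            for name in sorted(os.listdir(CONF_DIR)):
                if name.endswith((".conf", ".conflist")):
                    with open(os.path.join(CONF_DIR, name)) as f:
                        return json.load(f)
            raise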
restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:28:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:11.027004906Z" level=info msg="NetworkStart: stopping network for sandbox 4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:11.027163519Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/31f102a2-da27-4776-a782-d5260c122e77 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:11.027190732Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:11.027198836Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:11.027213618Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.033465823Z" level=info msg="NetworkStart: stopping network for sandbox d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.033614562Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/9161066e-672f-4bb0-a660-a3ecfc45b6cd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.033638811Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.033645165Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.033652080Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.033929069Z" level=info msg="NetworkStart: stopping network for sandbox 51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034071615Z" level=info msg="Got pod network 
&{Name:dns-default-srzv5 Namespace:openshift-dns ID:51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/1552ac6e-0bdb-4202-8594-e3cc95df1fce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034098753Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034106224Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034113686Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034130059Z" level=info msg="NetworkStart: stopping network for sandbox 5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034253622Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/00e930b0-149e-44ba-8326-10a02c63400a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034278078Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034287794Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.034295300Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.036313706Z" level=info msg="NetworkStart: stopping network for sandbox fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.036427511Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/fd62b91c-a764-4c09-9842-6e863233a147 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.036447787Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:14.036454530Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:14 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:28:14.036460440Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:15.021469171Z" level=info msg="NetworkStart: stopping network for sandbox 35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:15.021629625Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/1264c6fc-3c07-4d91-ac63-883403db268a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:15.021657765Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:15.021665764Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:15.021673262Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:16.024697000Z" level=info msg="NetworkStart: stopping network for sandbox 115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:16.024852703Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/3bbecc63-904d-449d-905f-64b29354d9c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:16.024877142Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:16.024884592Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:16.024891453Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:17.021694401Z" level=info msg="NetworkStart: stopping network for sandbox d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:17.021862453Z" level=info msg="Got pod 
network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/feb11661-6b7e-4e8b-b418-dfc242d28c4d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:17.021885783Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:17.021893411Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:17.021900481Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024484700Z" level=info msg="NetworkStart: stopping network for sandbox 09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024721701Z" level=info msg="NetworkStart: stopping network for sandbox a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024869402Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/2eea8b81-6c6d-4b7e-93c6-4aac3de29474 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024880432Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/7939d2ef-bbc1-41e4-9611-2607cab17506 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024894600Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024903307Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024910222Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024904382Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024978693Z" level=warning 
msg="falling back to loading from existing plugins on disk" Jan 23 16:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:20.024985915Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:21.019307944Z" level=info msg="NetworkStart: stopping network for sandbox 9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:21.019443761Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/afb29534-4ffc-4bc4-b964-638acbd9f7e4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:21.019467132Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:21.019473371Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:21.019479728Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:23.996879 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:28:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:24.000155 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:27.861024 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:27.861045 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:27.861052 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:27.861066 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:27.861073 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 
16:28:27.861078 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:27.861088 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:28:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:28.141749801Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:28:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:35.997079 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:28:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:35.997786 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.590834003Z" level=info msg="NetworkStart: stopping network for sandbox dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.590973924Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/c9df0ea4-a262-4480-9607-17c0fb957ddb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.590995248Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.591001856Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.591008423Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.592879999Z" level=info msg="NetworkStart: stopping network for sandbox c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.593019704Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/5c0b06d5-983d-42bc-a0b2-22462a24625d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.593045866Z" 
level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.593054480Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.593061074Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.594791205Z" level=info msg="NetworkStart: stopping network for sandbox 8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.594906190Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/7933bf25-8606-49f5-a722-aa984a99a2c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.594928203Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.594935047Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.594940840Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595130574Z" level=info msg="NetworkStart: stopping network for sandbox cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595265360Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/53dd6174-093f-49b5-b2e8-cd1222f45041 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595291639Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595299321Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595307360Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595848800Z" level=info msg="NetworkStart: stopping network for sandbox 
149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595963524Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/357298c1-c9dd-421e-98bf-973fd7cb032f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595984222Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595990926Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:28:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:37.595997908Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:28:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:28:47.997442 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6"
Jan 23 16:28:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:47.997930 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.034180761Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.034229235Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bcb4aa51\x2d2b76\x2d4856\x2d9c74\x2dbf6e10add049.mount: Succeeded.
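The "PollImmediate error waiting for ReadinessIndicatorFile (on del)" failures above, and the matching "still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf" add failures below, come from Multus gating every CNI ADD/DEL on the existence of the default network's config file; ovnkube-node never writes it here because that container is itself in CrashLoopBackOff. A stdlib-only Go sketch of that style of wait, under the assumption that a plain stat poll approximates the check:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses; roughly
// the shape of the ReadinessIndicatorFile wait seen in these logs.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // default network is ready
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForFile("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 10*time.Second)
	fmt.Println(err) // on this node it would time out, as in the log
}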
Jan 23 16:28:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bcb4aa51\x2d2b76\x2d4856\x2d9c74\x2dbf6e10add049.mount: Succeeded.
Jan 23 16:28:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bcb4aa51\x2d2b76\x2d4856\x2d9c74\x2dbf6e10add049.mount: Succeeded.
Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.082360271Z" level=info msg="runSandbox: deleting pod ID 93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220 from idIndex" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.082390471Z" level=info msg="runSandbox: removing pod sandbox 93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.082404152Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.082415229Z" level=info msg="runSandbox: unmounting shmPath for sandbox 93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220-userdata-shm.mount: Succeeded.
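The runSandbox entries above show the cleanup order CRI-O walks through after a sandbox fails: drop the pod ID from the in-memory idIndex, remove the sandbox, drop the container ID, unmount the per-sandbox shm, and then (as later entries show) remove it from storage and release the reserved names. A compressed Go sketch of that order as read off these messages; every function here is a hypothetical stand-in named after the log text, not CRI-O's actual API:

package main

import "log"

// step stands in for one cleanup action; the real work in CRI-O differs.
func step(msg, id string) { log.Printf("runSandbox: %s %s", msg, id) }

// cleanupSandbox mirrors the order of the runSandbox messages above.
func cleanupSandbox(id string) {
	step("deleting pod ID from idIndex:", id)
	step("removing pod sandbox:", id)
	step("deleting container ID from idIndex for sandbox:", id)
	step("unmounting shmPath for sandbox:", id)
	step("removing pod sandbox from storage:", id)
	step("releasing container and pod sandbox names for:", id)
}

func main() {
	cleanupSandbox("93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220")
}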
Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.098471769Z" level=info msg="runSandbox: removing pod sandbox from storage: 93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.101116464Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:49.101139402Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=57efa743-56d6-4e6b-b6fb-8c9e313e2d21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:49.101337 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:28:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:49.101383 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:28:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:49.101405 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:28:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:49.101452 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(93e3c9442335a0dbbfbb9ed854ad6cac5a4e82a6ce2f64d198295d8fdc64a220): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.038195498Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.038443512Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-31f102a2\x2dda27\x2d4776\x2da782\x2dd5260c122e77.mount: Succeeded.
Jan 23 16:28:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-31f102a2\x2dda27\x2d4776\x2da782\x2dd5260c122e77.mount: Succeeded.
Jan 23 16:28:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-31f102a2\x2dda27\x2d4776\x2da782\x2dd5260c122e77.mount: Succeeded.
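The "back-off 5m0s restarting failed container=ovnkube-node" entries repeated through this section are the kubelet's crash-loop back-off already at its ceiling: the restart delay doubles per failure until it hits a cap. A small Go sketch of that doubling; the 10s base and 5m cap are assumed from the kubelet's usual defaults, not taken from this log:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelays doubles base per restart up to maxDelay, the pattern
// behind the kubelet's "back-off 5m0s" messages above.
func crashLoopDelays(base, maxDelay time.Duration, restarts int) []time.Duration {
	out := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		out = append(out, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	return out
}

func main() {
	fmt.Println(crashLoopDelays(10*time.Second, 5*time.Minute, 7))
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}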
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.077300301Z" level=info msg="runSandbox: deleting pod ID 4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b from idIndex" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.077324964Z" level=info msg="runSandbox: removing pod sandbox 4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.077338604Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.077352991Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b-userdata-shm.mount: Succeeded.
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.097443741Z" level=info msg="runSandbox: removing pod sandbox from storage: 4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.100748599Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:56.100788555Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=71181d44-5279-4b1d-9454-afef904d88a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:56.100917 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready?
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:28:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:56.100963 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:28:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:56.100985 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:28:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:56.101036 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4b31f05f8f66d68bd81d0cab2e37e81c9fffa1202a7b5344541477b1c424e28b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:28:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:58.143042933Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.044426217Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.044462575Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.044897656Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.044936457Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.045977774Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:28:59.046012072Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.046578274Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.046610147Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fd62b91c\x2da764\x2d4c09\x2d9842\x2d6e863233a147.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-fd62b91c\x2da764\x2d4c09\x2d9842\x2d6e863233a147.mount has successfully entered the 'dead' state. Jan 23 16:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1552ac6e\x2d0bdb\x2d4202\x2d8594\x2de3cc95df1fce.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1552ac6e\x2d0bdb\x2d4202\x2d8594\x2de3cc95df1fce.mount has successfully entered the 'dead' state. Jan 23 16:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9161066e\x2d672f\x2d4bb0\x2da660\x2da3ecfc45b6cd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9161066e\x2d672f\x2d4bb0\x2da660\x2da3ecfc45b6cd.mount has successfully entered the 'dead' state. Jan 23 16:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-00e930b0\x2d149e\x2d44ba\x2d8326\x2d10a02c63400a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-00e930b0\x2d149e\x2d44ba\x2d8326\x2d10a02c63400a.mount has successfully entered the 'dead' state. Jan 23 16:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1552ac6e\x2d0bdb\x2d4202\x2d8594\x2de3cc95df1fce.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1552ac6e\x2d0bdb\x2d4202\x2d8594\x2de3cc95df1fce.mount has successfully entered the 'dead' state. Jan 23 16:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-00e930b0\x2d149e\x2d44ba\x2d8326\x2d10a02c63400a.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-00e930b0\x2d149e\x2d44ba\x2d8326\x2d10a02c63400a.mount has successfully entered the 'dead' state. Jan 23 16:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9161066e\x2d672f\x2d4bb0\x2da660\x2da3ecfc45b6cd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9161066e\x2d672f\x2d4bb0\x2da660\x2da3ecfc45b6cd.mount has successfully entered the 'dead' state. Jan 23 16:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fd62b91c\x2da764\x2d4c09\x2d9842\x2d6e863233a147.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-fd62b91c\x2da764\x2d4c09\x2d9842\x2d6e863233a147.mount has successfully entered the 'dead' state. Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090325611Z" level=info msg="runSandbox: deleting pod ID d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0 from idIndex" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090355803Z" level=info msg="runSandbox: removing pod sandbox d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090331373Z" level=info msg="runSandbox: deleting pod ID 51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1 from idIndex" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090404667Z" level=info msg="runSandbox: removing pod sandbox 51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090415056Z" level=info msg="runSandbox: deleting pod ID 5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc from idIndex" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090439641Z" level=info msg="runSandbox: removing pod sandbox 5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090453918Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090468436Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090420738Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 
51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090523600Z" level=info msg="runSandbox: unmounting shmPath for sandbox 51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090369765Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.090575336Z" level=info msg="runSandbox: unmounting shmPath for sandbox d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.099301871Z" level=info msg="runSandbox: deleting pod ID fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4 from idIndex" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.099328035Z" level=info msg="runSandbox: removing pod sandbox fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.099340573Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.099352092Z" level=info msg="runSandbox: unmounting shmPath for sandbox fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.102420461Z" level=info msg="runSandbox: removing pod sandbox from storage: 51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.102481988Z" level=info msg="runSandbox: removing pod sandbox from storage: 5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.105672808Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=3a066baa-d2cf-41e3-8203-247f77e8da04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.105690927Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=3a066baa-d2cf-41e3-8203-247f77e8da04 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.105935 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.105987 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.106009 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.106055 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.106479241Z" level=info msg="runSandbox: removing pod sandbox from storage: d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.108693905Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.108711330Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=3b7aa5ff-820f-45f4-8bfe-d22d9ee9126d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.108918 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.108956 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
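[Editor's note] Every "add" failure above reduces to the same root cause: Multus polls for a readiness indicator file that OVN-Kubernetes writes only once the default network is up, and the poll times out. Below is a minimal Go sketch of that kind of check, using the k8s.io/apimachinery wait package whose timeout produces the exact "timed out waiting for the condition" text in the log; the 1s interval and 30s timeout are illustrative assumptions, as only the file path appears in the log.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator blocks until path exists or timeout expires.
// PollImmediate checks once right away, then every interval; on timeout it
// returns wait.ErrWaitTimeout ("timed out waiting for the condition").
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(1*time.Second, timeout, func() (bool, error) {
		_, err := os.Stat(path)
		if os.IsNotExist(err) {
			return false, nil // file not there yet; keep polling
		}
		return err == nil, err
	})
}

func main() {
	err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 30*time.Second)
	fmt.Println(err) // "timed out waiting for the condition" if the file never appears
}
```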
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.108978 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.109021 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.110494172Z" level=info msg="runSandbox: removing pod sandbox from storage: fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.111741776Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.111758895Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=f111a69b-62ef-46af-9626-49bf8c1c9a6f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.112014 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.112178 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.112214 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.112263 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.114718965Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:28:59.114736112Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=cdc22370-4c0e-49d6-96a8-74faaece6362 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.114935 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.114974 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.114996 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:28:59.115038 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.032821764Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.032860445Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1264c6fc\x2d3c07\x2d4d91\x2dac63\x2d883403db268a.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1264c6fc\x2d3c07\x2d4d91\x2dac63\x2d883403db268a.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fd62b91c\x2da764\x2d4c09\x2d9842\x2d6e863233a147.mount: Succeeded.
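[Editor's note] Every kubelet line above starts with "rpc error: code = Unknown desc = ..." because CRI-O returns a plain error over the CRI gRPC interface; without an explicit status code, grpc-go transports it as codes.Unknown. A small sketch with the grpc-go status package reproduces that formatting (the message text is elided here):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Construct the kind of error the kubelet receives from the runtime.
	err := status.Error(codes.Unknown, "failed to create pod network sandbox ...")
	fmt.Println(err) // rpc error: code = Unknown desc = failed to create pod network sandbox ...

	// Callers can recover the code from the error on the receiving side.
	fmt.Println(status.Convert(err).Code()) // Unknown
}
```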
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1552ac6e\x2d0bdb\x2d4202\x2d8594\x2de3cc95df1fce.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9161066e\x2d672f\x2d4bb0\x2da660\x2da3ecfc45b6cd.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-00e930b0\x2d149e\x2d44ba\x2d8326\x2d10a02c63400a.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d2f9ce62ed9433463de11c4875a394c36fe49c6442c67810d672a6fa78575ab0-userdata-shm.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fc1301693673724268b3056e2a15fcad0274144b1887afe23b64183aeac575b4-userdata-shm.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-51bf07c3b8dbb1b74f1b4f50bba6d919b74af508b7b7a58edc7218fcd54b67b1-userdata-shm.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5e2efd03d8189760151db3d18c344e4e0b72dad3ca0c2c489041101c575109cc-userdata-shm.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1264c6fc\x2d3c07\x2d4d91\x2dac63\x2d883403db268a.mount: Succeeded.
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.067282298Z" level=info msg="runSandbox: deleting pod ID 35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb from idIndex" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.067311589Z" level=info msg="runSandbox: removing pod sandbox 35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.067328134Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.067341640Z" level=info msg="runSandbox: unmounting shmPath for sandbox 35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb-userdata-shm.mount: Succeeded.
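[Editor's note] The `\x2d` sequences in the mount unit names above are systemd's escaping of "-" (hex 0x2d), which is reserved as a path separator in unit names. A tiny Go helper illustrates recovering the underlying namespace ID; it reverses only this one escape and is not a full reimplementation of systemd-escape:

```go
package main

import (
	"fmt"
	"strings"
)

// unescapeDash reverses systemd's \x2d escape so the embedded UUID is
// readable again. Raw-string backticks keep the backslash literal.
func unescapeDash(unit string) string {
	return strings.ReplaceAll(unit, `\x2d`, "-")
}

func main() {
	fmt.Println(unescapeDash(`run-netns-1264c6fc\x2d3c07\x2d4d91\x2dac63\x2d883403db268a.mount`))
	// Output: run-netns-1264c6fc-3c07-4d91-ac63-883403db268a.mount
}
```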
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.079449314Z" level=info msg="runSandbox: removing pod sandbox from storage: 35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.085654998Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:00.085683186Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=02ee3796-5455-447c-ae73-06717e250733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:00.085881 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:29:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:00.086048 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:29:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:00.086074 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:29:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:00.086128 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(35c8cba44ef4254fd737d10a9627dacfc3b067bce81b61669c9f1c50182f1feb): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.036412019Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.036456862Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3bbecc63\x2d904d\x2d449d\x2d905f\x2d64b29354d9c6.mount: Succeeded.
Jan 23 16:29:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3bbecc63\x2d904d\x2d449d\x2d905f\x2d64b29354d9c6.mount: Succeeded.
Jan 23 16:29:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3bbecc63\x2d904d\x2d449d\x2d905f\x2d64b29354d9c6.mount: Succeeded.
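[Editor's note] Each failure above is reported four times (remote_runtime.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go) because the same underlying RPC error propagates up through the kubelet's layers, each adding context. A minimal sketch of that idiom with Go's %w wrapping, under the assumption that this is the general pattern rather than the kubelet's exact code:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	// Root cause as received from the runtime over gRPC (text abbreviated).
	root := errors.New("rpc error: code = Unknown desc = failed to create pod network sandbox")

	// Each layer wraps with %w, adding its own context while preserving
	// the original error for errors.Is/errors.As checks.
	sandboxErr := fmt.Errorf("Failed to create sandbox for pod: %w", root)
	syncErr := fmt.Errorf(`failed to "CreatePodSandbox" with CreatePodSandboxError: %w`, sandboxErr)

	fmt.Println(syncErr)                  // full chain, root cause repeated at the end
	fmt.Println(errors.Is(syncErr, root)) // true: the root cause survives the wrapping
}
```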
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.074431499Z" level=info msg="runSandbox: deleting pod ID 115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60 from idIndex" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.074457913Z" level=info msg="runSandbox: removing pod sandbox 115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.074474713Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.074487260Z" level=info msg="runSandbox: unmounting shmPath for sandbox 115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60-userdata-shm.mount: Succeeded.
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.089400360Z" level=info msg="runSandbox: removing pod sandbox from storage: 115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.092785856Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:01.092805362Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=07caa4cc-b9cb-41ed-bc80-7683c1bb330e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:01.093077 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:01.093138 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:01.093172 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:01.093238 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(115c53c7126fac15b554112b94ba3960cfe3eddc7e7b2ac3e57d278e836d4e60): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:01.996355 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6"
Jan 23 16:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:01.996892 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.033490333Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.033533568Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-feb11661\x2d6b7e\x2d4e8b\x2db418\x2ddfc242d28c4d.mount: Succeeded.
Jan 23 16:29:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-feb11661\x2d6b7e\x2d4e8b\x2db418\x2ddfc242d28c4d.mount: Succeeded.
Jan 23 16:29:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-feb11661\x2d6b7e\x2d4e8b\x2db418\x2ddfc242d28c4d.mount: Succeeded.
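[Editor's note] The "back-off 5m0s restarting failed container=ovnkube-node" line is the kubelet's crash-loop back-off hitting its cap for the OVN node pod, which is also why the readiness indicator file never appears. The sketch below models the doubling back-off; the 10s initial delay and 5m cap match upstream kubelet defaults but should be treated as assumptions here:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Restart back-off doubles from an initial delay until it reaches a
	// cap; once capped, every further restart waits the full cap.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for i := 1; delay < maxDelay; i++ {
		fmt.Printf("restart %d: wait %v\n", i, delay)
		delay *= 2
	}
	fmt.Printf("further restarts: wait %v (capped, logged as CrashLoopBackOff)\n", maxDelay)
}
```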
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.079306976Z" level=info msg="runSandbox: deleting pod ID d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8 from idIndex" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.079331628Z" level=info msg="runSandbox: removing pod sandbox d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.079346364Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.079357859Z" level=info msg="runSandbox: unmounting shmPath for sandbox d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8-userdata-shm.mount: Succeeded.
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.096427810Z" level=info msg="runSandbox: removing pod sandbox from storage: d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.099709289Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.099726893Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=45be9cf3-cb99-4a6e-a52c-a0c616ed722c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:02.099948 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:29:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:02.099986 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:29:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:02.100007 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:29:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:02.100051 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d6bc48a3cbc6204269e2f052d5f7451e5f948e8571e1996bceb5535014d36ff8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:29:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:02.995426 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.995780799Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:02.995824359Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:29:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:03.007552302Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/3ce91a7d-0b9d-49cb-aeb2-b341dc392d95 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:29:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:03.007572597Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.036049382Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.036087436Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.036566387Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.036600513Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2eea8b81\x2d6c6d\x2d4b7e\x2d93c6\x2d4aac3de29474.mount: Succeeded.
Jan 23 16:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7939d2ef\x2dbbc1\x2d41e4\x2d9611\x2d2607cab17506.mount: Succeeded.
Jan 23 16:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2eea8b81\x2d6c6d\x2d4b7e\x2d93c6\x2d4aac3de29474.mount: Succeeded.
Jan 23 16:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7939d2ef\x2dbbc1\x2d41e4\x2d9611\x2d2607cab17506.mount: Succeeded.
Jan 23 16:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2eea8b81\x2d6c6d\x2d4b7e\x2d93c6\x2d4aac3de29474.mount: Succeeded.
Jan 23 16:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7939d2ef\x2dbbc1\x2d41e4\x2d9611\x2d2607cab17506.mount: Succeeded.
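[Editor's note] The "Got pod network &{Name:... Namespace:...}" entry above is Go's %+v formatting of a pointer to a struct. A trimmed-down stand-in (not CRI-O's actual ocicni type, which has more fields) reproduces the shape of that line:

```go
package main

import "fmt"

// PodNetwork is a simplified stand-in for the struct CRI-O logs; only a
// few of the fields visible in the log entry are modeled here.
type PodNetwork struct {
	Name, Namespace, ID, UID, NetNS string
}

func main() {
	pn := &PodNetwork{
		Name:      "openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab",
		Namespace: "openshift-kube-scheduler",
		ID:        "04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823",
	}
	// %+v on a pointer prints "&{Field:value ...}", which is exactly the
	// "&{Name:... Namespace:... ID:...}" formatting seen in the log.
	fmt.Printf("Got pod network %+v\n", pn)
}
```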
Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.073293903Z" level=info msg="runSandbox: deleting pod ID 09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0 from idIndex" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.073320218Z" level=info msg="runSandbox: removing pod sandbox 09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.073333620Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.073346604Z" level=info msg="runSandbox: unmounting shmPath for sandbox 09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.073306890Z" level=info msg="runSandbox: deleting pod ID a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7 from idIndex" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.073398473Z" level=info msg="runSandbox: removing pod sandbox a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.073411919Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.073424340Z" level=info msg="runSandbox: unmounting shmPath for sandbox a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.089459411Z" level=info msg="runSandbox: removing pod sandbox from storage: 09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.089473956Z" level=info msg="runSandbox: removing pod sandbox from storage: a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.092323393Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.092342749Z" 
level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=225678f0-5fe9-4580-8833-b7408956895b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:05.092574 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:05.092620 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:05.092645 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:05.092694 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.095277503Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:05.095294471Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b0825728-d3e4-4d52-b480-8492fb5923de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:05.095478 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:05.095510 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:05.095530 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:05.095565 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.029288640Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.029315867Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-afb29534\x2d4ffc\x2d4bc4\x2db964\x2d638acbd9f7e4.mount: Succeeded. Jan 23 16:29:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-09dfb651bef16e7b84232bf9ed8d60ebede5a31d9d4b110707dedc40cb29c2b0-userdata-shm.mount: Succeeded. Jan 23 16:29:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a42af0322702ce70a3c6b1345236e6f2076dea20f25d9d566f0718d5a04429e7-userdata-shm.mount: Succeeded. Jan 23 16:29:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-afb29534\x2d4ffc\x2d4bc4\x2db964\x2d638acbd9f7e4.mount: Succeeded. Jan 23 16:29:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-afb29534\x2d4ffc\x2d4bc4\x2db964\x2d638acbd9f7e4.mount: Succeeded.
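The DEL-path timeouts above and the ADD-path timeouts that follow all name the same missing file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes writes once it is healthy. A hypothetical one-off check one could build and run on the node to confirm whether that file has appeared (a sketch, not a shipped tool):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// The file every add/del above is blocked on; written by ovn-kubernetes
	// once it is up, consumed by Multus as its readiness indicator.
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

	fi, err := os.Stat(indicator)
	if err != nil {
		// Expected state in this log: ovnkube-node is down, so the config
		// never lands and Multus keeps timing out.
		fmt.Println("readiness indicator missing:", err)
		os.Exit(1)
	}
	fmt.Printf("readiness indicator present, last written %s ago\n",
		time.Since(fi.ModTime()).Round(time.Second))
}
```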
Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.072303087Z" level=info msg="runSandbox: deleting pod ID 9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481 from idIndex" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.072327566Z" level=info msg="runSandbox: removing pod sandbox 9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.072340605Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.072351442Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481-userdata-shm.mount: Succeeded. Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.083433645Z" level=info msg="runSandbox: removing pod sandbox from storage: 9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.086664385Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:06.086683620Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=243cf4c9-ebef-454b-b125-f6f50e3dc05a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:06.086868 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:29:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:06.086912 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:29:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:06.086936 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:29:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:06.086985 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9b2c7a106cc8abe82d13eb66423386acd3306cd5dbdbcb6a5210d1a024520481): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:29:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:10.995693 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:29:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:10.995974 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:29:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:10.996053447Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:10.996319936Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:10.996183664Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:10.996511006Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:11.011409278Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/5bbe0972-18a1-4aaa-a5bf-44cdb53db10a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:11.011429827Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:11.012311396Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/6a4fd4ec-5ab1-4812-b346-0eb7754b229b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:11.012329386Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:11.996342 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:11.996734856Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:11.996788035Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:12.011906181Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/33178643-fee3-4442-ae09-7f91c62231b3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:12.011953378Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:12.996382 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:29:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:12.996534 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:29:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:12.996743 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:12.996730021Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:12.996777638Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:12.996807273Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:12.996848416Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:12.997099895Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:12.997133752Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:13.015391751Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/8db0212f-f83e-47c1-97c5-9ea363817c6f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:13.015411271Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:13.017507148Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/882ff39c-0c55-4e44-b7de-44d0185e1e46 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:13.017528604Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:13.018819949Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/d2b3ddfe-e346-4ecd-8702-fc7f1e179a28 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" 
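The "Got pod network &{Name:... Aliases:map[]}" entries in this run are Go's %+v rendering of the pod-network value CRI-O hands to its CNI layer just before the Multus wait begins. A trimmed-down sketch that reproduces the format (the field set is abbreviated here; the real ocicni type carries more fields, such as Networks and RuntimeConfig):

```go
package main

import "fmt"

// PodNetwork is an abbreviated stand-in for the struct behind those entries.
type PodNetwork struct {
	Name      string
	Namespace string
	ID        string
	UID       string
	NetNS     string
}

func main() {
	pn := &PodNetwork{
		Name:      "etcd-guard-hub-master-0.workload.bos2.lab",
		Namespace: "openshift-etcd",
		ID:        "fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a",
		UID:       "16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b",
		NetNS:     "/var/run/netns/a564f0e1-a09c-4b2d-9264-0b9c17b3c4c6",
	}
	// %+v on a struct pointer renders as &{Field:value ...}, which is exactly
	// the shape of the "Got pod network &{...}" log entries.
	fmt.Printf("Got pod network %+v\n", pn)
}
```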
Jan 23 16:29:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:13.018839682Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:13.995841 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:29:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:13.996223262Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:13.996264516Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:14.007389235Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/a564f0e1-a09c-4b2d-9264-0b9c17b3c4c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:14.007417437Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:15.996290 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:29:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:15.996681319Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:15.996735705Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:16.008426177Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/3d1044e1-027c-4f04-8b04-4b008ea30984 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:16.008448588Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:16.995547 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:16.995878139Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:16.995917322Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:16.996408 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:29:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:16.996912 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:29:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:17.008283955Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/309f4fea-e92f-4af6-af2e-a0c98b07c71c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:17.008305391Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:17.996346 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:29:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:17.996545 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:29:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:17.996642429Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:17.996676232Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:17.996789055Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:17.996816607Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:18.010925423Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/a58a5c30-8408-4921-b9fb-f4c9c19dd25c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:18.010944696Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:18.012529468Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/1eb1204e-53ba-4519-8ded-828e981a6d1b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:18.012550998Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.601628960Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.601673886Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.603995363Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.604027590Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.605714700Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.605749367Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.606294725Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.606331860Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c9df0ea4\x2da262\x2d4480\x2d9607\x2d17c0fb957ddb.mount: Succeeded. Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.607051663Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.607089491Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-357298c1\x2dc9dd\x2d421e\x2d98bf\x2d973fd7cb032f.mount: Succeeded. Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-53dd6174\x2d093f\x2d49b5\x2db2e8\x2dcd1222f45041.mount: Succeeded. Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7933bf25\x2d8606\x2d49f5\x2da722\x2daa984a99a2c8.mount: Succeeded. Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5c0b06d5\x2d983d\x2d42bc\x2da0b2\x2d22462a24625d.mount: Succeeded. Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-357298c1\x2dc9dd\x2d421e\x2d98bf\x2d973fd7cb032f.mount: Succeeded.
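The kubelet entry at 16:29:16 ("back-off 5m0s restarting failed container=ovnkube-node ... with CrashLoopBackOff") is consistent with being the root of all of these sandbox failures: while ovnkube-node stays down, the readiness indicator file is never written. The 5m0s is the kubelet's container-restart backoff at its ceiling; a sketch of the doubling schedule, assuming the usual kubelet defaults of a 10s initial delay and a 5m cap (the real implementation lives in the kubelet, not here):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second // assumed kubelet default base delay
		ceiling = 5 * time.Minute  // assumed kubelet default cap (the "5m0s" in the log)
	)
	delay := initial
	for restart := 1; delay < ceiling; restart++ {
		fmt.Printf("restart %d: wait %s\n", restart, delay)
		delay *= 2 // 10s, 20s, 40s, 1m20s, 2m40s, ...
	}
	// Once the cap is reached, every subsequent restart waits the full 5m0s,
	// which is why this node keeps logging the same back-off message.
	fmt.Println("subsequent restarts: wait", ceiling)
}
```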
Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7933bf25\x2d8606\x2d49f5\x2da722\x2daa984a99a2c8.mount: Succeeded. Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5c0b06d5\x2d983d\x2d42bc\x2da0b2\x2d22462a24625d.mount: Succeeded. Jan 23 16:29:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c9df0ea4\x2da262\x2d4480\x2d9607\x2d17c0fb957ddb.mount: Succeeded. Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648339672Z" level=info msg="runSandbox: deleting pod ID 8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42 from idIndex" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648347930Z" level=info msg="runSandbox: deleting pod ID c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2 from idIndex" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648348997Z" level=info msg="runSandbox: deleting pod ID dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd from idIndex" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648414856Z" level=info msg="runSandbox: removing pod sandbox dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648434376Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648450380Z" level=info msg="runSandbox: unmounting shmPath for sandbox dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648377925Z" level=info msg="runSandbox: removing pod sandbox 8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648515887Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648533847Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648392800Z" level=info msg="runSandbox: removing pod sandbox c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648608525Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648623256Z" level=info msg="runSandbox: unmounting shmPath for sandbox c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648351536Z" level=info msg="runSandbox: deleting pod ID 149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee from idIndex" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648698666Z" level=info msg="runSandbox: removing pod sandbox 149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648712617Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.648725717Z" level=info msg="runSandbox: unmounting shmPath for sandbox 149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.652311043Z" level=info msg="runSandbox: deleting pod ID cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30 from idIndex" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.652338460Z" level=info msg="runSandbox: removing pod sandbox cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.652352354Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.652365717Z" level=info msg="runSandbox: unmounting shmPath for sandbox 
cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.661536408Z" level=info msg="runSandbox: removing pod sandbox from storage: c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.661590204Z" level=info msg="runSandbox: removing pod sandbox from storage: 8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.661592468Z" level=info msg="runSandbox: removing pod sandbox from storage: dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.661552373Z" level=info msg="runSandbox: removing pod sandbox from storage: 149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.668555177Z" level=info msg="runSandbox: removing pod sandbox from storage: cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.668714239Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.668739584Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=d32fc9ea-6a7c-4d9d-88ff-3ce1bf3c532d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.669009 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.669174 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.669198 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.669264 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.671838585Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.671858924Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=8b3033d4-cc99-4407-8921-9f75995b4c58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.672117 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.672159 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.672183 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.672239 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.674786476Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.674804057Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d68362aa-0217-4209-a6ce-26432b86b632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.675004 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.675036 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.675059 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.675095 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.677752818Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.677770505Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=a557323d-d21f-4997-93ff-a1754dcbe359 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.677984 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.678015 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.678037 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.678073 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.680680655Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.680697078Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=4fd0a6c1-2a94-4b88-854b-3427e336ae7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.680858 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.680890 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.680910 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:22.680948 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:22.716803 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:22.716979 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:22.717078 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:22.717099 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717098635Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717129843Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717249927Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717286298Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717357050Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717385438Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:22.717257 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717448226Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717473403Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717505504Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.717476167Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.743764408Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/fae26f4f-e9a9-4085-ac7c-905c3668450a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.743785758Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.745298699Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/de94877f-5d1d-497c-b01e-c8564a06b696 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.745316744Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.746508125Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/df2ffc48-4109-459b-a27d-80214afa5a06 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.746527972Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.747321338Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652 
UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/800d806d-24c2-4456-8237-71d1a5f9b507 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.747340101Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.748136208Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9b8f9e51-cdcb-4943-bbdd-d3c87a7c373e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:29:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:22.748156980Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-357298c1\x2dc9dd\x2d421e\x2d98bf\x2d973fd7cb032f.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-53dd6174\x2d093f\x2d49b5\x2db2e8\x2dcd1222f45041.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-53dd6174\x2d093f\x2d49b5\x2db2e8\x2dcd1222f45041.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7933bf25\x2d8606\x2d49f5\x2da722\x2daa984a99a2c8.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5c0b06d5\x2d983d\x2d42bc\x2da0b2\x2d22462a24625d.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c9df0ea4\x2da262\x2d4480\x2d9607\x2d17c0fb957ddb.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c1cd2aef6cfb100eecd6758e526bdbec81737e63641674b0c69eac7785cf3ef2-userdata-shm.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-149da20f9ec3d42f069b0ca63f30359d89ea765575320487d423778f8d07ffee-userdata-shm.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cc4549334b393addb5b7edc30c6503c228e98b8f0d5169a3a491dae5aafe9e30-userdata-shm.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8f2d75acf9a4af54f56a979517a87407e53a4730803c87a7e3376b84882aee42-userdata-shm.mount: Succeeded.
Jan 23 16:29:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dc1a3fd90599d0f34c2549631a743450833f475626d11e9e7551a898993ddbcd-userdata-shm.mount: Succeeded.
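The five CreatePodSandbox failures above share one root cause: Multus refuses to complete a pod ADD until the default network plugin (OVN-Kubernetes) has written its readiness indicator file, and /var/run/multus/cni/net.d/10-ovn-kubernetes.conf never appears because ovnkube-node is crash-looping (see the back-off entries further on). Below is a minimal Go sketch of that gating pattern -- illustrative only, not Multus's actual source; the poll interval and timeout are assumptions, while the file path and error wording are taken from the log.

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait" // requires the k8s.io/apimachinery module
)

// waitForReadinessFile blocks until the default network has dropped its CNI
// config at path, or the timeout elapses. A missing file is not an error;
// it just means "keep polling".
func waitForReadinessFile(path string, timeout time.Duration) error {
	// The 250ms interval and the caller-supplied timeout are assumed values.
	err := wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
		if _, statErr := os.Stat(path); statErr == nil {
			return true, nil // file exists: default network is ready
		}
		return false, nil
	})
	if err != nil {
		// On timeout, wait.PollImmediate returns an error that prints as
		// "timed out waiting for the condition" -- the exact suffix CRI-O
		// relays in the RunPodSandbox failures above.
		return fmt.Errorf("still waiting for readinessindicatorfile @ %s. pollimmediate error: %v", path, err)
	}
	return nil
}

func main() {
	if err := waitForReadinessFile("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Once the indicator file appears, the same ADD requests that failed above can succeed with no change on the pod side, which is why kubelet simply tears each sandbox down and retries, as the "No sandbox for pod can be found. Need to start a new one" entries show.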
Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:27.861377 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:27.861396 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:27.861404 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:27.861410 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:27.861418 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:27.861424 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:27.861432 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:27.997454 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:27.997952 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:29:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:28.142836163Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:29:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:40.996304 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:29:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:40.996960 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:29:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:48.020613384Z" level=info msg="NetworkStart: stopping network for sandbox 04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:48.020813639Z" level=info msg="Got pod network 
&{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/3ce91a7d-0b9d-49cb-aeb2-b341dc392d95 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:48.020838385Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:29:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:48.020845847Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:29:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:48.020852036Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:29:53.996881 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:29:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:29:53.997435 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.024687029Z" level=info msg="NetworkStart: stopping network for sandbox adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.024828360Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/5bbe0972-18a1-4aaa-a5bf-44cdb53db10a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.024853971Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.024861742Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.024868273Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.024931100Z" level=info msg="NetworkStart: stopping network for sandbox 72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.025051708Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab 
Namespace:openshift-kube-apiserver ID:72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/6a4fd4ec-5ab1-4812-b346-0eb7754b229b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.025074535Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.025081194Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:29:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:56.025087271Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:57.026461164Z" level=info msg="NetworkStart: stopping network for sandbox a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:57.026625116Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/33178643-fee3-4442-ae09-7f91c62231b3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:57.026652402Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:57.026660050Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:57.026668999Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.028448323Z" level=info msg="NetworkStart: stopping network for sandbox 404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.028790184Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/8db0212f-f83e-47c1-97c5-9ea363817c6f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.028812038Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.028819163Z" level=warning 
msg="falling back to loading from existing plugins on disk" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.028825338Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032033339Z" level=info msg="NetworkStart: stopping network for sandbox 3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032164343Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/882ff39c-0c55-4e44-b7de-44d0185e1e46 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032195780Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032217300Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032230022Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032269419Z" level=info msg="NetworkStart: stopping network for sandbox 418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032390338Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/d2b3ddfe-e346-4ecd-8702-fc7f1e179a28 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032411900Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032418458Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.032424275Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:58.142724137Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:29:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:59.020846163Z" level=info msg="NetworkStart: stopping network for sandbox 
fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:29:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:59.021050630Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/a564f0e1-a09c-4b2d-9264-0b9c17b3c4c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:29:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:59.021076427Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:29:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:59.021084946Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:29:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:29:59.021091971Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:01.022267239Z" level=info msg="NetworkStart: stopping network for sandbox c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:01.022408085Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/3d1044e1-027c-4f04-8b04-4b008ea30984 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:01.022431350Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:01.022438204Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:01.022445856Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:02.019856573Z" level=info msg="NetworkStart: stopping network for sandbox 77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:02.019994398Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/309f4fea-e92f-4af6-af2e-a0c98b07c71c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:30:02.020015688Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:30:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:02.020023309Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:30:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:02.020029059Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.024321140Z" level=info msg="NetworkStart: stopping network for sandbox 41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.024457797Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/a58a5c30-8408-4921-b9fb-f4c9c19dd25c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.024481751Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.024488493Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.024494733Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.025755555Z" level=info msg="NetworkStart: stopping network for sandbox 15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.025900058Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/1eb1204e-53ba-4519-8ded-828e981a6d1b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.025925314Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.025932849Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:30:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:03.025939345Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:04.997095 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" 
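The recurring "RemoveContainer" / CrashLoopBackOff pairs around this point show kubelet's restart back-off at its ceiling for ovnkube-node: each failed restart doubles the wait, capped at the "back-off 5m0s" quoted in the messages. A self-contained sketch of that shape follows, assuming kubelet's documented 10s base and 5m cap; the function is illustrative, not kubelet's implementation.

package main

import (
	"fmt"
	"time"
)

// restartDelay returns the wait before restart attempt n, doubling from a
// 10s base (assumed from kubelet's documented defaults) up to a 5m cap,
// which matches the "back-off 5m0s restarting failed container" log lines.
func restartDelay(failures int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> wait %s\n", n, restartDelay(n))
	}
	// From the sixth failure onward this prints 5m0s, the capped value that
	// ovnkube-node-897lw has reached in the entries above and below.
}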
Jan 23 16:30:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:04.997654 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.757086418Z" level=info msg="NetworkStart: stopping network for sandbox 630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.757241385Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/fae26f4f-e9a9-4085-ac7c-905c3668450a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.757266773Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.757273412Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.757279106Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.758023972Z" level=info msg="NetworkStart: stopping network for sandbox e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.758130221Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/de94877f-5d1d-497c-b01e-c8564a06b696 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.758151786Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.758158253Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.758165277Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.759749098Z" level=info msg="NetworkStart: stopping network for sandbox 73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.759894156Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9b8f9e51-cdcb-4943-bbdd-d3c87a7c373e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.759925285Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.759935292Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.759944845Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.760485615Z" level=info msg="NetworkStart: stopping network for sandbox 00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.760631306Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/800d806d-24c2-4456-8237-71d1a5f9b507 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.760657016Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.760664352Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.760672362Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.761125708Z" level=info msg="NetworkStart: stopping network for sandbox 00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.761251971Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/df2ffc48-4109-459b-a27d-80214afa5a06 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.761280780Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.761289217Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:30:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:07.761296298Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:18.996387 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6"
Jan 23 16:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:18.997084 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:27.861500 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:27.861524 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:27.861531 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:27.861538 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:27.861543 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:27.861549 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:27.861557 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:30:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:27.867023108Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=5826a43e-eb67-44b3-a1ce-5df51607b5a2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:30:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:27.867135975Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5826a43e-eb67-44b3-a1ce-5df51607b5a2 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:30:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:28.141926379Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:30:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:29.996554 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6"
Jan 23 16:30:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:29.999622 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.032511126Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.032766042Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3ce91a7d\x2d0b9d\x2d49cb\x2daeb2\x2db341dc392d95.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3ce91a7d\x2d0b9d\x2d49cb\x2daeb2\x2db341dc392d95.mount has successfully entered the 'dead' state.
Jan 23 16:30:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3ce91a7d\x2d0b9d\x2d49cb\x2daeb2\x2db341dc392d95.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3ce91a7d\x2d0b9d\x2d49cb\x2daeb2\x2db341dc392d95.mount has successfully entered the 'dead' state.
Jan 23 16:30:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3ce91a7d\x2d0b9d\x2d49cb\x2daeb2\x2db341dc392d95.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-3ce91a7d\x2d0b9d\x2d49cb\x2daeb2\x2db341dc392d95.mount has successfully entered the 'dead' state.
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.084309433Z" level=info msg="runSandbox: deleting pod ID 04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823 from idIndex" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.084339627Z" level=info msg="runSandbox: removing pod sandbox 04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.084353829Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.084370274Z" level=info msg="runSandbox: unmounting shmPath for sandbox 04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.096457837Z" level=info msg="runSandbox: removing pod sandbox from storage: 04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.099310496Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:33.099329278Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=a97c219f-194b-4997-913a-2760aa9a4037 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:33.099587 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:30:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:33.099648 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:30:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:33.099673 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:30:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:33.099728 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(04b718f85790274fb1940b018fd295f4f0e71b0b47bfe149f8f5805f14fac823): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.035780461Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.035825667Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.035836746Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.035891178Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6a4fd4ec\x2d5ab1\x2d4812\x2db346\x2d0eb7754b229b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-6a4fd4ec\x2d5ab1\x2d4812\x2db346\x2d0eb7754b229b.mount has successfully entered the 'dead' state.
Jan 23 16:30:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5bbe0972\x2d18a1\x2d4aaa\x2da5bf\x2d44cdb53db10a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5bbe0972\x2d18a1\x2d4aaa\x2da5bf\x2d44cdb53db10a.mount has successfully entered the 'dead' state.
Jan 23 16:30:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6a4fd4ec\x2d5ab1\x2d4812\x2db346\x2d0eb7754b229b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-6a4fd4ec\x2d5ab1\x2d4812\x2db346\x2d0eb7754b229b.mount has successfully entered the 'dead' state.
Jan 23 16:30:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5bbe0972\x2d18a1\x2d4aaa\x2da5bf\x2d44cdb53db10a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5bbe0972\x2d18a1\x2d4aaa\x2da5bf\x2d44cdb53db10a.mount has successfully entered the 'dead' state.
Jan 23 16:30:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6a4fd4ec\x2d5ab1\x2d4812\x2db346\x2d0eb7754b229b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-6a4fd4ec\x2d5ab1\x2d4812\x2db346\x2d0eb7754b229b.mount has successfully entered the 'dead' state.
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.083320957Z" level=info msg="runSandbox: deleting pod ID adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7 from idIndex" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.083346963Z" level=info msg="runSandbox: removing pod sandbox adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.083360971Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.083375373Z" level=info msg="runSandbox: unmounting shmPath for sandbox adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.084284368Z" level=info msg="runSandbox: deleting pod ID 72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883 from idIndex" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.084307289Z" level=info msg="runSandbox: removing pod sandbox 72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.084318834Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.084330851Z" level=info msg="runSandbox: unmounting shmPath for sandbox 72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.095466036Z" level=info msg="runSandbox: removing pod sandbox from storage: adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.096458618Z" level=info msg="runSandbox: removing pod sandbox from storage: 72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.098868454Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.098886657Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=fd6c9e11-f23f-41ee-9c5e-a7834b0fcd5e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:41.099103 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:30:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:41.099143 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:30:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:41.099164 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:30:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:41.099215 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.101919967Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:41.101938632Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=2a036d6b-a736-47f7-9d09-7a36a13fa9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:41.102154 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:30:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:41.102354 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:30:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:41.102375 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:30:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:41.102418 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 16:30:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5bbe0972\x2d18a1\x2d4aaa\x2da5bf\x2d44cdb53db10a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5bbe0972\x2d18a1\x2d4aaa\x2da5bf\x2d44cdb53db10a.mount has successfully entered the 'dead' state.
Jan 23 16:30:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-72165f38821a73a8a7813dcce6ba75c3ea4ebb86ab242cb430e6f8468932d883-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:30:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-adfe4855292ac0faa82d86628c2eb7017e536921133ca1cfd62a7a79d2f8eab7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.037587541Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.037631055Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-33178643\x2dfee3\x2d4442\x2dae09\x2d7f91c62231b3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-33178643\x2dfee3\x2d4442\x2dae09\x2d7f91c62231b3.mount has successfully entered the 'dead' state.
Jan 23 16:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-33178643\x2dfee3\x2d4442\x2dae09\x2d7f91c62231b3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-33178643\x2dfee3\x2d4442\x2dae09\x2d7f91c62231b3.mount has successfully entered the 'dead' state.
Jan 23 16:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-33178643\x2dfee3\x2d4442\x2dae09\x2d7f91c62231b3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-33178643\x2dfee3\x2d4442\x2dae09\x2d7f91c62231b3.mount has successfully entered the 'dead' state.
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.087319053Z" level=info msg="runSandbox: deleting pod ID a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a from idIndex" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.087348494Z" level=info msg="runSandbox: removing pod sandbox a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.087367376Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.087384336Z" level=info msg="runSandbox: unmounting shmPath for sandbox a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.103464419Z" level=info msg="runSandbox: removing pod sandbox from storage: a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.106832590Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:42.106851188Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=195c1312-f73c-4dbc-87ec-2237dc093e52 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:42.107066 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:42.107111 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:42.107137 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:42.107189 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a376aa61c1f8879b67241a02e20492a6875cf3e97e5671ffb6015d6321d24b4a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 16:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:42.997041 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6"
Jan 23 16:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:42.997610 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.039894090Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.039930525Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.042518891Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.042550613Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.042979173Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.043015435Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8db0212f\x2df83e\x2d47c1\x2d97c5\x2d9ea363817c6f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-8db0212f\x2df83e\x2d47c1\x2d97c5\x2d9ea363817c6f.mount has successfully entered the 'dead' state.
Jan 23 16:30:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d2b3ddfe\x2de346\x2d4ecd\x2d8702\x2dfc7f1e179a28.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d2b3ddfe\x2de346\x2d4ecd\x2d8702\x2dfc7f1e179a28.mount has successfully entered the 'dead' state.
Jan 23 16:30:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-882ff39c\x2d0c55\x2d4e44\x2db7de\x2d44d0185e1e46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-882ff39c\x2d0c55\x2d4e44\x2db7de\x2d44d0185e1e46.mount has successfully entered the 'dead' state.
Jan 23 16:30:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d2b3ddfe\x2de346\x2d4ecd\x2d8702\x2dfc7f1e179a28.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d2b3ddfe\x2de346\x2d4ecd\x2d8702\x2dfc7f1e179a28.mount has successfully entered the 'dead' state.
Jan 23 16:30:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-882ff39c\x2d0c55\x2d4e44\x2db7de\x2d44d0185e1e46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-882ff39c\x2d0c55\x2d4e44\x2db7de\x2d44d0185e1e46.mount has successfully entered the 'dead' state.
Jan 23 16:30:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8db0212f\x2df83e\x2d47c1\x2d97c5\x2d9ea363817c6f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-8db0212f\x2df83e\x2d47c1\x2d97c5\x2d9ea363817c6f.mount has successfully entered the 'dead' state.
Jan 23 16:30:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d2b3ddfe\x2de346\x2d4ecd\x2d8702\x2dfc7f1e179a28.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d2b3ddfe\x2de346\x2d4ecd\x2d8702\x2dfc7f1e179a28.mount has successfully entered the 'dead' state.
Jan 23 16:30:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-882ff39c\x2d0c55\x2d4e44\x2db7de\x2d44d0185e1e46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-882ff39c\x2d0c55\x2d4e44\x2db7de\x2d44d0185e1e46.mount has successfully entered the 'dead' state.
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.088326636Z" level=info msg="runSandbox: deleting pod ID 3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7 from idIndex" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.088358834Z" level=info msg="runSandbox: removing pod sandbox 3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.088378916Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.088397407Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.088327812Z" level=info msg="runSandbox: deleting pod ID 418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc from idIndex" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.088447632Z" level=info msg="runSandbox: removing pod sandbox 418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.088461076Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.088473841Z" level=info msg="runSandbox: unmounting shmPath for sandbox 418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.096302595Z" level=info msg="runSandbox: deleting pod ID 404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5 from idIndex" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.096324013Z" level=info msg="runSandbox: removing pod sandbox 404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.096338118Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.096349804Z" level=info msg="runSandbox: unmounting shmPath for sandbox 404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.100452925Z" level=info msg="runSandbox: removing pod sandbox from storage: 418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.103627124Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.103644441Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=99c0ba15-3cb1-4525-a0e9-6bac97bd9d89 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.103906 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.103952 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.103978 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.104026 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.108440082Z" level=info msg="runSandbox: removing pod sandbox from storage: 3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.111697955Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.111717643Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=20564661-324e-4a3e-a867-e6bf1d01fbef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.111922 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.111961 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.111983 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.112022 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.113438456Z" level=info msg="runSandbox: removing pod sandbox from storage: 404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.116432316Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:43.116449195Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=08275fd9-324e-433c-9a0c-a696cb505aa2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.116628 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.116663 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.116685 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:30:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:43.116722 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.032149916Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.032194890Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a564f0e1\x2da09c\x2d4b2d\x2d9264\x2d0b9c17b3c4c6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a564f0e1\x2da09c\x2d4b2d\x2d9264\x2d0b9c17b3c4c6.mount has successfully entered the 'dead' state. Jan 23 16:30:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a564f0e1\x2da09c\x2d4b2d\x2d9264\x2d0b9c17b3c4c6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a564f0e1\x2da09c\x2d4b2d\x2d9264\x2d0b9c17b3c4c6.mount has successfully entered the 'dead' state. Jan 23 16:30:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8db0212f\x2df83e\x2d47c1\x2d97c5\x2d9ea363817c6f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8db0212f\x2df83e\x2d47c1\x2d97c5\x2d9ea363817c6f.mount has successfully entered the 'dead' state. Jan 23 16:30:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3fc3325bc3dee8013a81caa8e50d34df37cdcd12d3c4a220c7e62266a7b36fd7-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:30:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-418f790021da4fa8d49635b20a27298ff78d4c44dcf7eb433b087f9b564665bc-userdata-shm.mount has successfully entered the 'dead' state. 
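Analysis note: every CreatePodSandbox failure in the stretch above has the same root cause. Multus will not delegate the pod's network ADD until the default network (OVN-Kubernetes) has written its readiness indicator file, and the poll times out because /var/run/multus/cni/net.d/10-ovn-kubernetes.conf never appears. Below is a minimal sketch of that kind of wait, assuming Go and k8s.io/apimachinery's wait.PollImmediate (the poller the errors name); the interval and timeout values are illustrative assumptions, not Multus's actual settings.

```go
// readiness_wait.go - illustrative sketch of the "waiting for
// readinessindicatorfile" loop described by the Multus errors above.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the default network's CNI
// config file (written by OVN-Kubernetes once it is up) exists.
func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
	err := wait.PollImmediate(interval, timeout, func() (bool, error) {
		if _, statErr := os.Stat(path); statErr != nil {
			return false, nil // not there yet; keep polling
		}
		return true, nil // file exists: default network is ready
	})
	if err != nil {
		// On timeout PollImmediate returns "timed out waiting for the
		// condition" - the exact text quoted in the log entries.
		return fmt.Errorf("PollImmediate error waiting for ReadinessIndicatorFile: %w", err)
	}
	return nil
}

func main() {
	err := waitForReadinessIndicator(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		1*time.Second, 10*time.Second) // assumed values for illustration
	fmt.Println(err)
}
```

Until that file exists, every ADD (and, as the "on del" entries show, every DELETE) for every pod on the node fails the same way, which is why the identical error repeats across openshift-kube-apiserver, openshift-etcd, openshift-multus, and the other namespaces.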
Jan 23 16:30:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-404efba436d4b09596afbc1d71755025999bc227ec6e90a094d35c2657d7f6a5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:30:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a564f0e1\x2da09c\x2d4b2d\x2d9264\x2d0b9c17b3c4c6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a564f0e1\x2da09c\x2d4b2d\x2d9264\x2d0b9c17b3c4c6.mount has successfully entered the 'dead' state. Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.072317359Z" level=info msg="runSandbox: deleting pod ID fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a from idIndex" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.072346964Z" level=info msg="runSandbox: removing pod sandbox fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.072364990Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.072381337Z" level=info msg="runSandbox: unmounting shmPath for sandbox fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a-userdata-shm.mount has successfully entered the 'dead' state. 
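Analysis note: the \x2d runs in the mount-unit names above are not corruption. systemd derives a unit name from a mount point by mapping "/" to "-" and hex-escaping characters such as "-" itself, so /run/netns/a564f0e1-a09c-4b2d-9264-0b9c17b3c4c6 becomes run-netns-a564f0e1\x2da09c\x2d4b2d\x2d9264\x2d0b9c17b3c4c6.mount. A minimal Go sketch of just that mapping (the real systemd-escape handles additional characters):

```go
// unitname_escape.go - sketch of the path-to-unit-name escaping that
// produces the "\x2d" sequences in the transient .mount units above.
// Covers only the characters seen in this log.
package main

import (
	"fmt"
	"strings"
)

func escapePathToUnit(path string) string {
	var b strings.Builder
	for _, c := range strings.Trim(path, "/") {
		switch {
		case c == '/':
			b.WriteByte('-') // the path separator becomes '-'
		case c == '-' || c == '\\':
			fmt.Fprintf(&b, `\x%02x`, c) // '-' itself is escaped as \x2d
		default:
			b.WriteRune(c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(escapePathToUnit("/run/netns/a564f0e1-a09c-4b2d-9264-0b9c17b3c4c6"))
	// run-netns-a564f0e1\x2da09c\x2d4b2d\x2d9264\x2d0b9c17b3c4c6.mount
}
```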
Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.084465518Z" level=info msg="runSandbox: removing pod sandbox from storage: fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.087873477Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.087893826Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=2ec0f7c7-617c-4b1b-a4cd-9aa9de3b4faa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:44.088102 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:44.088145 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:30:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:44.088169 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:30:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:44.088223 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fdd6b1ef609e2c89233a763f2fa272c00476e301564b4fe6a7bd46eeb814917a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:30:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:44.995442 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.995756562Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:44.995797592Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:45.010874345Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/44886cee-8ecf-419f-8639-808a8013d773 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:45.010900002Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.033598717Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.033646045Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3d1044e1\x2d027c\x2d4f04\x2d8b04\x2d4b008ea30984.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3d1044e1\x2d027c\x2d4f04\x2d8b04\x2d4b008ea30984.mount has successfully entered the 'dead' state. Jan 23 16:30:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3d1044e1\x2d027c\x2d4f04\x2d8b04\x2d4b008ea30984.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3d1044e1\x2d027c\x2d4f04\x2d8b04\x2d4b008ea30984.mount has successfully entered the 'dead' state. Jan 23 16:30:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3d1044e1\x2d027c\x2d4f04\x2d8b04\x2d4b008ea30984.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3d1044e1\x2d027c\x2d4f04\x2d8b04\x2d4b008ea30984.mount has successfully entered the 'dead' state. Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.076312356Z" level=info msg="runSandbox: deleting pod ID c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928 from idIndex" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.076342175Z" level=info msg="runSandbox: removing pod sandbox c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.076357578Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.076371741Z" level=info msg="runSandbox: unmounting shmPath for sandbox c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928-userdata-shm.mount has successfully entered the 'dead' state. 
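Analysis note: each failed sandbox is torn down in the same fixed order, visible in the crio entries above: the pod ID leaves the idIndex, the sandbox is removed, the container ID leaves the idIndex, the shmPath is unmounted (systemd then reports the transient .mount unit dead), the sandbox leaves storage, and finally the reserved container and sandbox names are released so the kubelet can retry. A schematic Go sketch of that ordering follows; the step functions are hypothetical stand-ins, and only the sequence is taken from the log messages themselves.

```go
// runsandbox_cleanup.go - schematic of the cleanup order cri-o logs
// when a sandbox fails to get a network. Step names are hypothetical.
package main

import "log"

func cleanupFailedSandbox(sandboxID string) {
	steps := []struct {
		msg string // one %s placeholder: the sandbox ID
		fn  func(string) error
	}{
		{"deleting pod ID %s from idIndex", removePodID},
		{"removing pod sandbox %s", removeSandbox},
		{"deleting container ID from idIndex for sandbox %s", removeContainerID},
		{"unmounting shmPath for sandbox %s", unmountShm},
		{"removing pod sandbox from storage: %s", removeFromStorage},
	}
	for _, step := range steps {
		log.Printf("runSandbox: "+step.msg, sandboxID)
		if err := step.fn(sandboxID); err != nil {
			log.Printf("runSandbox cleanup failed: %v", err) // assumption: log and stop
			return
		}
	}
	// last, the reserved names are released so a retry can reuse them
	log.Printf("runSandbox: releasing pod sandbox name for %s", sandboxID)
}

// No-op stand-ins so the sketch compiles; each corresponds to one of
// the real steps logged above.
func removePodID(string) error       { return nil }
func removeSandbox(string) error     { return nil }
func removeContainerID(string) error { return nil }
func unmountShm(string) error        { return nil }
func removeFromStorage(string) error { return nil }

func main() { cleanupFailedSandbox("example-sandbox-id") }
```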
Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.088471204Z" level=info msg="runSandbox: removing pod sandbox from storage: c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.091669929Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:46.091693153Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=bb60829f-e610-4952-8cfa-8e16a7866ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:46.091907 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:46.091950 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:30:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:46.091972 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:30:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:46.092024 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c9f2c00652ce8f0123be31865ecbe29e21dcff91280857b4f6e97992da4dc928): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.031506089Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.031542918Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-309f4fea\x2de92f\x2d4af6\x2daf2e\x2da0c98b07c71c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-309f4fea\x2de92f\x2d4af6\x2daf2e\x2da0c98b07c71c.mount has successfully entered the 'dead' state. Jan 23 16:30:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-309f4fea\x2de92f\x2d4af6\x2daf2e\x2da0c98b07c71c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-309f4fea\x2de92f\x2d4af6\x2daf2e\x2da0c98b07c71c.mount has successfully entered the 'dead' state. Jan 23 16:30:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-309f4fea\x2de92f\x2d4af6\x2daf2e\x2da0c98b07c71c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-309f4fea\x2de92f\x2d4af6\x2daf2e\x2da0c98b07c71c.mount has successfully entered the 'dead' state. 
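Analysis note: the names crio "releases" in these entries follow the k8s[_POD]_&lt;pod&gt;_&lt;namespace&gt;_&lt;uid&gt;_&lt;attempt&gt; convention, where the extra POD segment marks the infra (pause) container and the trailing _0 is the first creation attempt. A small parser sketch of that convention as it appears in these lines; the field meanings are read off the log, and error handling is simplified.

```go
// sandbox_names.go - parser sketch for the container/sandbox names
// seen in the "releasing container name" / "releasing pod sandbox
// name" entries above.
package main

import (
	"fmt"
	"strings"
)

type sandboxName struct {
	Pod, Namespace, UID, Attempt string
	InfraContainer               bool // true when the POD segment is present
}

func parseSandboxName(name string) (sandboxName, error) {
	parts := strings.Split(name, "_") // pod names never contain '_'
	if len(parts) < 5 || parts[0] != "k8s" {
		return sandboxName{}, fmt.Errorf("unrecognized name %q", name)
	}
	n := sandboxName{InfraContainer: parts[1] == "POD"}
	if n.InfraContainer {
		parts = parts[1:] // drop the POD marker
	}
	if len(parts) != 5 {
		return sandboxName{}, fmt.Errorf("unexpected field count in %q", name)
	}
	n.Pod, n.Namespace, n.UID, n.Attempt = parts[1], parts[2], parts[3], parts[4]
	return n, nil
}

func main() {
	n, _ := parseSandboxName("k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0")
	fmt.Printf("%+v\n", n)
}
```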
Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.076308792Z" level=info msg="runSandbox: deleting pod ID 77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d from idIndex" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.076333432Z" level=info msg="runSandbox: removing pod sandbox 77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.076347134Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.076359250Z" level=info msg="runSandbox: unmounting shmPath for sandbox 77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.092434945Z" level=info msg="runSandbox: removing pod sandbox from storage: 77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.095670457Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:47.095689385Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=624ae665-627b-4686-8583-9824c27d88b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:47.095918 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:30:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:47.095974 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:30:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:47.095997 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:30:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:47.096051 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(77e117f374549cae3333dc5b15bf2bcfb64ca44c72270381637510cc726e556d): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.034969934Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.035002383Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.037566352Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.037609845Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a58a5c30\x2d8408\x2d4921\x2db9fb\x2df4c9c19dd25c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a58a5c30\x2d8408\x2d4921\x2db9fb\x2df4c9c19dd25c.mount has successfully entered the 'dead' state. Jan 23 16:30:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1eb1204e\x2d53ba\x2d4519\x2d8ded\x2d828e981a6d1b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1eb1204e\x2d53ba\x2d4519\x2d8ded\x2d828e981a6d1b.mount has successfully entered the 'dead' state. Jan 23 16:30:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a58a5c30\x2d8408\x2d4921\x2db9fb\x2df4c9c19dd25c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a58a5c30\x2d8408\x2d4921\x2db9fb\x2df4c9c19dd25c.mount has successfully entered the 'dead' state. 
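Analysis note: the same Multus error appears four times per pod (remote_runtime.go:222, kuberuntime_sandbox.go:71, kuberuntime_manager.go:772, pod_workers.go:965) because each kubelet layer re-logs the wrapped error it received from the layer below, and each structured-logging pass quotes the message string again, which is why the innermost copy in the pod_workers.go entries shows \\\" escapes. A toy Go sketch of how re-quoting a wrapped error produces that escalation; the function names here are hypothetical, only the call sites named in the log do the real work.

```go
// error_chain.go - toy demonstration of error wrapping plus repeated
// quoting, which produces the nested \" and \\\" runs seen above.
package main

import (
	"fmt"
	"strconv"
)

// what the CRI returns to the kubelet over the RunPodSandbox RPC
func runPodSandbox() error {
	return fmt.Errorf("rpc error: code = Unknown desc = failed to create pod network sandbox")
}

// the next layer embeds the lower error in a quoted phrase and wraps it
func createSandbox() error {
	return fmt.Errorf("failed to \"CreatePodSandbox\": %w", runPodSandbox())
}

func main() {
	err := createSandbox()
	// one quoting pass: plain \" escapes, as in the kuberuntime entries
	fmt.Println("layer 1:", strconv.Quote(err.Error()))
	// quoting the already-quoted string escapes the escapes, producing
	// the \\\" runs seen in the pod_workers.go "Error syncing pod" lines
	fmt.Println("layer 2:", strconv.Quote(strconv.Quote(err.Error())))
}
```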
Jan 23 16:30:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1eb1204e\x2d53ba\x2d4519\x2d8ded\x2d828e981a6d1b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1eb1204e\x2d53ba\x2d4519\x2d8ded\x2d828e981a6d1b.mount has successfully entered the 'dead' state. Jan 23 16:30:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a58a5c30\x2d8408\x2d4921\x2db9fb\x2df4c9c19dd25c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a58a5c30\x2d8408\x2d4921\x2db9fb\x2df4c9c19dd25c.mount has successfully entered the 'dead' state. Jan 23 16:30:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1eb1204e\x2d53ba\x2d4519\x2d8ded\x2d828e981a6d1b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1eb1204e\x2d53ba\x2d4519\x2d8ded\x2d828e981a6d1b.mount has successfully entered the 'dead' state. Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.072415863Z" level=info msg="runSandbox: deleting pod ID 41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c from idIndex" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.072441056Z" level=info msg="runSandbox: removing pod sandbox 41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.072454289Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.072468430Z" level=info msg="runSandbox: unmounting shmPath for sandbox 41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.076281682Z" level=info msg="runSandbox: deleting pod ID 15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268 from idIndex" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.076309818Z" level=info msg="runSandbox: removing pod sandbox 15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.076325547Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.076341320Z" level=info msg="runSandbox: unmounting shmPath for sandbox 15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.080423236Z" level=info msg="runSandbox: removing pod sandbox from storage: 41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.083723131Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.083742505Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=b23ddee1-614c-49b8-9c0d-5eb2a92ec4c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:48.083968 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:48.084013 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:30:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:48.084036 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:30:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:48.084083 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41a07daa4df6fee28b5786b5261c94a784b2b76445d3bb98f97dd5dabc9b1b8c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.093430935Z" level=info msg="runSandbox: removing pod sandbox from storage: 15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.096631309Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:48.096649872Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=d05d199a-a91c-4451-beef-cceb2d2ffdd5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:48.096831 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:48.096862 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:30:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:48.096882 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:30:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:48.096918 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(15851c8529e9c90d392242b0af26fc360bf55a913a0d18de37e50807fb4e4268): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.768911500Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.768950613Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.768955571Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.768995024Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.771289524Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.771320073Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91" 
id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.771655203Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.771699876Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.772484501Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.772515032Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-de94877f\x2d5d1d\x2d497c\x2db01e\x2dc8564a06b696.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fae26f4f\x2de9a9\x2d4085\x2dac7c\x2d905c3668450a.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9b8f9e51\x2dcdcb\x2d4943\x2dbbdd\x2dd3c87a7c373e.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-800d806d\x2d24c2\x2d4456\x2d8237\x2d71d1a5f9b507.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-df2ffc48\x2d4109\x2d459b\x2da27d\x2d80214afa5a06.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-de94877f\x2d5d1d\x2d497c\x2db01e\x2dc8564a06b696.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fae26f4f\x2de9a9\x2d4085\x2dac7c\x2d905c3668450a.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9b8f9e51\x2dcdcb\x2d4943\x2dbbdd\x2dd3c87a7c373e.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-800d806d\x2d24c2\x2d4456\x2d8237\x2d71d1a5f9b507.mount: Succeeded. Jan 23 16:30:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-df2ffc48\x2d4109\x2d459b\x2da27d\x2d80214afa5a06.mount: Succeeded.
Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.811291247Z" level=info msg="runSandbox: deleting pod ID 630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36 from idIndex" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.811319094Z" level=info msg="runSandbox: removing pod sandbox 630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.811333819Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.811350333Z" level=info msg="runSandbox: unmounting shmPath for sandbox 630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.812284324Z" level=info msg="runSandbox: deleting pod ID e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c from idIndex" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.812308987Z" level=info msg="runSandbox: removing pod sandbox e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.812323457Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.812335823Z" level=info msg="runSandbox: unmounting shmPath for sandbox e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.819289870Z" level=info msg="runSandbox: deleting pod ID 00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5 from idIndex" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.819326927Z" level=info msg="runSandbox: removing pod sandbox 00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.819343229Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.819359824Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.819302011Z" level=info msg="runSandbox: deleting pod ID 73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91 from idIndex" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.819413751Z" level=info msg="runSandbox: removing pod sandbox 73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.819426815Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.819440046Z" level=info msg="runSandbox: unmounting shmPath for sandbox 73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.820284852Z" level=info msg="runSandbox: deleting pod ID 00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652 from idIndex" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.820311603Z" level=info msg="runSandbox: removing pod sandbox 00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.820326951Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.820340070Z" level=info msg="runSandbox: unmounting shmPath for sandbox 00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.827452632Z" level=info msg="runSandbox: removing pod sandbox from storage: e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.827506273Z" level=info msg="runSandbox: removing pod sandbox from storage: 630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.830547188Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.830564986Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=4054bbe9-2536-4cc3-a33b-be3e3f12ecdf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.830800 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.830845 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.830869 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.830915 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.831438110Z" level=info msg="runSandbox: removing pod sandbox from storage: 73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.831502623Z" level=info msg="runSandbox: removing pod sandbox from storage: 00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.832504845Z" level=info msg="runSandbox: removing pod sandbox from storage: 00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.833551459Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.833570763Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=cec68cf7-c0c8-4c92-9349-1a5a3bf3abad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.833818 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): 
Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.833848 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.833869 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.833905 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.836486602Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.836503272Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=fe50a427-7365-4a2b-923b-b7b051ff1dbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.836713 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.836746 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.836767 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.836805 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.839292344Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.839310340Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=7f2a94da-cbd9-461e-83fe-9e71763ae471 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.839534 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.839566 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.839587 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.839626 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.842190558Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.842211775Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=f6dfbc43-3bc1-4312-8309-b10c569d4a28 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.842421 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.842451 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.842472 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:52.842510 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:52.886235 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:52.886258 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:52.886316 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:52.886484 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886599393Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886629990Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:52.886650 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886714764Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886744196Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886823781Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886847547Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886860695Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886911457Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886927391Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.886862330Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.913330218Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d449d63a-6993-4f95-8bba-a912a7050f9e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.913358088Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.914051160Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/bff6e9ce-75b4-4fd8-8407-dd816c200f4d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.914073609Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.916842303Z" 
level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/fbebf627-b668-44a6-a850-2f67ec07cd20 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.916861742Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.917856568Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/e0e15bec-cf51-493b-8f7b-0fd01d542a7a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.917876569Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.920315553Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/e05aaecd-b58f-4f63-b920-51ea670d2bc0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.920341454Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:52.996195 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.996570353Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:52.996606355Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:53.006711454Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/727b6877-c6ec-4dd8-84fb-fc9337fbfdeb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:53.006729913Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9b8f9e51\x2dcdcb\x2d4943\x2dbbdd\x2dd3c87a7c373e.mount: Succeeded. 
Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-800d806d\x2d24c2\x2d4456\x2d8237\x2d71d1a5f9b507.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-df2ffc48\x2d4109\x2d459b\x2da27d\x2d80214afa5a06.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-de94877f\x2d5d1d\x2d497c\x2db01e\x2dc8564a06b696.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fae26f4f\x2de9a9\x2d4085\x2dac7c\x2d905c3668450a.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e5743975c760a19f266fe7a28fff2f6787bc43bd3c16c998f06244d005c23a7c-userdata-shm.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-73faad7a43316c7f639a11832e6e80b3175c26964a900be8d5ac0bc147885c91-userdata-shm.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-00c3e5b99fb4f585739a47ac7bd7934cd8f3f3856a0f29b09c1fd43e8d467652-userdata-shm.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-00855f13c5089c5b2dd5fc273ede1218950929c56cb94c13f9e0b34d4f3d68d5-userdata-shm.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-630c241a5f2cfccb04f52227e99656001411c172abc47a1a6752edd97bb0fb36-userdata-shm.mount: Succeeded. Jan 23 16:30:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:53.995499 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:30:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:53.995827922Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:53.995867409Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:54.006708485Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/66909d9a-bdd0-442c-900f-99dff72bda0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:54.006728352Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:55.995713 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:55.995978 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:55.996042061Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:55.996100016Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:55.996421345Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:55.996463054Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.011882546Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/67803115-8222-4c37-a880-8157e7e489c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.011903706Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.011885453Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/c3292b6b-06ab-4e84-9757-f1afebe5f003 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.012059315Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:56.995858 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:56.995940 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:56.995982 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:56.996244 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.996230747Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.996265350Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.996298206Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.996328710Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.996408223Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.996426280Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.996440203Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:56.996466367Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:56.996808 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:30:56.997305 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.020424491Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/b089846c-7346-4573-9c07-1cd23f7d01d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:30:57.020449702Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.022999739Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/05f03049-f5dd-441f-b140-b9466ed1d6a0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.023024492Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.025447617Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/b19c813a-048d-4784-bd23-283a4c6c35b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.025469145Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.026162991Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/547b0bcd-2d82-44ed-9e60-ae066dbfc0bb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.026185423Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:30:57.996750 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.997177169Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:57.997224884Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:30:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:58.008463710Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/93888ead-3f31-48de-9058-d2ed06229407 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:30:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:58.008483059Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:30:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:30:58.143665103Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:31:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:00.996334 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:31:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:00.996775302Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:00.996813454Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:31:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:01.008078936Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/ad7803d6-c9a1-4ad5-8454-cb2ed68d79e9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:01.008099091Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:02.996185 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:31:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:02.996528266Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:02.996565658Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:31:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:03.009204305Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/1f8db7b7-b10c-4b31-879f-8236045d7b10 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:03.009238986Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:10.996644 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:31:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:31:10.997197 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:31:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:25.996991 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:31:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:31:25.997682 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:27.862349 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:27.862374 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:27.862384 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:27.862393 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:27.862399 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:27.862405 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:27.862412 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:31:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:28.143692311Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:31:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:30.024015192Z" level=info msg="NetworkStart: stopping network for sandbox b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:30.024193444Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/44886cee-8ecf-419f-8639-808a8013d773 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:30.024226521Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:30.024233730Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:30.024241110Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:36.996689 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:31:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:31:36.997227 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927138525Z" level=info msg="NetworkStart: stopping network for sandbox 89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927521164Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg 
Namespace:openshift-controller-manager ID:89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d449d63a-6993-4f95-8bba-a912a7050f9e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927550184Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927558959Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927565972Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927748000Z" level=info msg="NetworkStart: stopping network for sandbox 9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927885976Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/bff6e9ce-75b4-4fd8-8407-dd816c200f4d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927910018Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927918034Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.927924961Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.928734246Z" level=info msg="NetworkStart: stopping network for sandbox f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.928849356Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/fbebf627-b668-44a6-a850-2f67ec07cd20 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.928872271Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.928878889Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 
16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.928884935Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.931751569Z" level=info msg="NetworkStart: stopping network for sandbox 9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.931874735Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/e0e15bec-cf51-493b-8f7b-0fd01d542a7a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.931899517Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.931906905Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.931913141Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.932063708Z" level=info msg="NetworkStart: stopping network for sandbox 1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.932167896Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/e05aaecd-b58f-4f63-b920-51ea670d2bc0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.932194010Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.932203265Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:37.932219643Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:38.018814304Z" level=info msg="NetworkStart: stopping network for sandbox e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:38.018926740Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns 
ID:e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/727b6877-c6ec-4dd8-84fb-fc9337fbfdeb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:38.018946074Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:38.018952188Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:38.018957676Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674491498.1214] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674491498.1220] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674491498.1220] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674491498.1222] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674491498.1227] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674491498.1232] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:31:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:39.020836284Z" level=info msg="NetworkStart: stopping network for sandbox 1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:39.020957267Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/66909d9a-bdd0-442c-900f-99dff72bda0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:39.020977283Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:39.020984613Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:39.020990808Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:40 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674491500.0682] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:31:41
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026379272Z" level=info msg="NetworkStart: stopping network for sandbox 429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026525229Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/c3292b6b-06ab-4e84-9757-f1afebe5f003 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026546879Z" level=info msg="NetworkStart: stopping network for sandbox a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026551383Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026653680Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026662435Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026664602Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/67803115-8222-4c37-a880-8157e7e489c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026754806Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026763571Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:41.026770873Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036174386Z" level=info msg="NetworkStart: stopping network for sandbox 605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036301607Z" level=info msg="NetworkStart: stopping network for sandbox 361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:42 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036341899Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/05f03049-f5dd-441f-b140-b9466ed1d6a0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036380285Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036389121Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036396721Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036452295Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/b089846c-7346-4573-9c07-1cd23f7d01d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036476751Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036484854Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.036491836Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.038791897Z" level=info msg="NetworkStart: stopping network for sandbox bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.038926090Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/b19c813a-048d-4784-bd23-283a4c6c35b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.038948117Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.038954908Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.038961116Z" 
level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.039126840Z" level=info msg="NetworkStart: stopping network for sandbox 2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.039239659Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/547b0bcd-2d82-44ed-9e60-ae066dbfc0bb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.039261781Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.039268158Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:42.039274881Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:43.020669036Z" level=info msg="NetworkStart: stopping network for sandbox 7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:43.020806238Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/93888ead-3f31-48de-9058-d2ed06229407 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:43.020827924Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:43.020834408Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:43.020839976Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:46.019123813Z" level=info msg="NetworkStart: stopping network for sandbox 97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:46.019354526Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c 
UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/ad7803d6-c9a1-4ad5-8454-cb2ed68d79e9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:46.019379640Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:46.019386244Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:46.019392345Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:48.021317514Z" level=info msg="NetworkStart: stopping network for sandbox 41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:31:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:48.021499619Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/1f8db7b7-b10c-4b31-879f-8236045d7b10 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:31:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:48.021536177Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:31:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:48.021545303Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:31:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:48.021554089Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:31:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:31:48.997056 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:31:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:31:48.997629 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:31:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:31:58.142362195Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:32:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:02.996045 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:32:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:02.996995554Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" 
id=fa79b03f-84ae-4bc3-b6b0-dc755d9950c7 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:32:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:02.997159603Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=fa79b03f-84ae-4bc3-b6b0-dc755d9950c7 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:32:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:02.997786715Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=24da9b64-8cf5-4f1f-aa9f-84b3f529c40f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:32:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:02.997923587Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=24da9b64-8cf5-4f1f-aa9f-84b3f529c40f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:32:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:02.998970030Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=b3681ba3-2044-4d8e-88ca-66410232a70c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:32:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:02.999061336Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:03 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope. -- Subject: Unit crio-conmon-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:32:03 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5. -- Subject: Unit crio-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.117948719Z" level=info msg="Created container 42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=b3681ba3-2044-4d8e-88ca-66410232a70c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.118515975Z" level=info msg="Starting container: 42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" id=0420f5f7-ada5-43b3-a1f6-8c809627b29c name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.138122182Z" level=info msg="Started container" PID=44567 containerID=42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=0420f5f7-ada5-43b3-a1f6-8c809627b29c name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.142820090Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.153257614Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.153275558Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.153302968Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.161778448Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.161793869Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.161802290Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.170134312Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.170149511Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.170160947Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.178627248Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.178641972Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.178650430Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:32:03 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:32:03.186581614Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:32:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:03.186598278Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:32:03 hub-master-0.workload.bos2.lab conmon[44545]: conmon 42e86a61d7d742f8acbd <ninfo>: container 44567 exited with status 1 Jan 23 16:32:03 hub-master-0.workload.bos2.lab systemd[1]: crio-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope has successfully entered the 'dead' state. Jan 23 16:32:03 hub-master-0.workload.bos2.lab systemd[1]: crio-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope: Consumed 558ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope completed and consumed the indicated resources. Jan 23 16:32:03 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope has successfully entered the 'dead' state. Jan 23 16:32:03 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope: Consumed 58ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5.scope completed and consumed the indicated resources.
Jan 23 16:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:04.024516 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/182.log" Jan 23 16:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:04.025141 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/181.log" Jan 23 16:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:04.026248 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" exitCode=1 Jan 23 16:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:04.026271 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5} Jan 23 16:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:04.026288 8631 scope.go:115] "RemoveContainer" containerID="8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" Jan 23 16:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:04.027145 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:32:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:04.027098759Z" level=info msg="Removing container: 8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6" id=4b8aa653-deb0-49af-9fee-663722555f36 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:04.027705 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:32:04 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-d95163a6b6b9292810ab3878c61a81000c4bcbfe19781a5559bc36a66721ca74-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-d95163a6b6b9292810ab3878c61a81000c4bcbfe19781a5559bc36a66721ca74-merged.mount has successfully entered the 'dead' state. 
Jan 23 16:32:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:04.067852611Z" level=info msg="Removed container 8005f7d165268f47b1ba3210f1a9599d0551b06b645ebe9d6a6aa65f184ad8c6: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=4b8aa653-deb0-49af-9fee-663722555f36 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:32:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:05.029411 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/182.log" Jan 23 16:32:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:05.667482 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:32:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:05.668358 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:32:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:05.668856 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.036308346Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.036349022Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-44886cee\x2d8ecf\x2d419f\x2d8639\x2d808a8013d773.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-44886cee\x2d8ecf\x2d419f\x2d8639\x2d808a8013d773.mount has successfully entered the 'dead' state. Jan 23 16:32:15 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-44886cee\x2d8ecf\x2d419f\x2d8639\x2d808a8013d773.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-44886cee\x2d8ecf\x2d419f\x2d8639\x2d808a8013d773.mount has successfully entered the 'dead' state. 
Jan 23 16:32:15 hub-master-0.workload.bos2.lab systemd[1]: run-netns-44886cee\x2d8ecf\x2d419f\x2d8639\x2d808a8013d773.mount: Succeeded. Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.093340374Z" level=info msg="runSandbox: deleting pod ID b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e from idIndex" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.093365902Z" level=info msg="runSandbox: removing pod sandbox b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.093384280Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.093396077Z" level=info msg="runSandbox: unmounting shmPath for sandbox b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e-userdata-shm.mount: Succeeded.
Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.113447952Z" level=info msg="runSandbox: removing pod sandbox from storage: b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.117296824Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:15.117314636Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=951511cc-67aa-4835-931e-d0aec2ee1e41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:15.117498 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:32:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:15.117658 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:32:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:15.117682 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:32:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:15.117733 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b9ea3b94a223bfd4ad5f530b880e03efeb935874312ef6eb8a171873bdd3444e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:32:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:20.996596 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:32:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:20.997127 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.938950682Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.938988953Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.938985346Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.939091707Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.939559899Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.939589268Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.942636797Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.942676045Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.942809233Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.942839121Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fbebf627\x2db668\x2d44a6\x2da850\x2d2f67ec07cd20.mount: Succeeded. 
Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bff6e9ce\x2d75b4\x2d4fd8\x2d8407\x2ddd816c200f4d.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d449d63a\x2d6993\x2d4f95\x2d8bba\x2da912a7050f9e.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e05aaecd\x2db58f\x2d4f63\x2db920\x2d51ea670d2bc0.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e0e15bec\x2dcf51\x2d493b\x2d8f7b\x2d0fd01d542a7a.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e05aaecd\x2db58f\x2d4f63\x2db920\x2d51ea670d2bc0.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e0e15bec\x2dcf51\x2d493b\x2d8f7b\x2d0fd01d542a7a.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bff6e9ce\x2d75b4\x2d4fd8\x2d8407\x2ddd816c200f4d.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fbebf627\x2db668\x2d44a6\x2da850\x2d2f67ec07cd20.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d449d63a\x2d6993\x2d4f95\x2d8bba\x2da912a7050f9e.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e0e15bec\x2dcf51\x2d493b\x2d8f7b\x2d0fd01d542a7a.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fbebf627\x2db668\x2d44a6\x2da850\x2d2f67ec07cd20.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bff6e9ce\x2d75b4\x2d4fd8\x2d8407\x2ddd816c200f4d.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d449d63a\x2d6993\x2d4f95\x2d8bba\x2da912a7050f9e.mount: Succeeded. Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984356469Z" level=info msg="runSandbox: deleting pod ID 9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb from idIndex" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984375403Z" level=info msg="runSandbox: deleting pod ID 9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc from idIndex" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984402964Z" level=info msg="runSandbox: removing pod sandbox 9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984418276Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984438868Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984377328Z" level=info msg="runSandbox: deleting pod ID f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9 from idIndex" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984489038Z" level=info msg="runSandbox: removing pod sandbox f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984502375Z" level=info msg="runSandbox:
deleting container ID from idIndex for sandbox f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984514252Z" level=info msg="runSandbox: unmounting shmPath for sandbox f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984378217Z" level=info msg="runSandbox: deleting pod ID 1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf from idIndex" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984674430Z" level=info msg="runSandbox: removing pod sandbox 1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984688655Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984701753Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984385511Z" level=info msg="runSandbox: removing pod sandbox 9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984732871Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984752047Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984386682Z" level=info msg="runSandbox: deleting pod ID 89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115 from idIndex" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984819654Z" level=info msg="runSandbox: removing pod sandbox 89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984832202Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:22.984844390Z" level=info msg="runSandbox: unmounting shmPath for sandbox 89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.004560596Z" level=info msg="runSandbox: removing pod sandbox from storage: 9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.004578219Z" level=info msg="runSandbox: removing pod sandbox from storage: 1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.004585226Z" level=info msg="runSandbox: removing pod sandbox from storage: 89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.004619125Z" level=info msg="runSandbox: removing pod sandbox from storage: 9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.004561727Z" level=info msg="runSandbox: removing pod sandbox from storage: f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.012734773Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.012757901Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ce1fefdd-df27-46ab-a7b0-688e00532c9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.013008 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.013053 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.013076 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.013121 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.015890685Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.015909531Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=ad411d52-d568-4885-abd0-74dba1dcaedd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.016116 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.016149 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.016170 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.016208 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.018829870Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.018847161Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=71c30a9a-64aa-4eec-adfa-c28138bafe0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.019055 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.019087 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.019108 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.019147 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.021801385Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.021818399Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=4c979c47-bbcf-4e7b-bd17-78bc0f65e097 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.022048 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.022092 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.022114 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.022162 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.024672907Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.024688380Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d225b0c4-3de4-4c39-9d8a-4b44a45fc59c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.024785 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.024817 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.024838 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.024876 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.029074281Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.029101727Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:23.062679 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:23.062758 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:23.062884 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:23.062960 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:23.063008 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063066621Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063096217Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063196081Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063230208Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063315620Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063325447Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063361145Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063368016Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063385050Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.063333104Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.084278067Z" level=info msg="runSandbox: deleting pod ID e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa from idIndex" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.084303040Z" level=info msg="runSandbox: removing pod sandbox e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.084316464Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.084327439Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.089716361Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/b6588b3c-0d77-4cdd-9d3d-81a2fea6bb99 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.089740592Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.090296611Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/c8ca34c9-e51e-45d7-b982-6c38fadfe413 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.090320953Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.093833406Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c2706b0d-069e-4e3f-9e4e-13ec4d20eafa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.093855471Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.094368358Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/ae6ba663-6ba7-4afd-9cc0-bf7cb3e177b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.094389118Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.095689798Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/152546f6-0a7f-4238-b167-e05a00e7c471 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] 
Aliases:map[]}" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.095708517Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.108445936Z" level=info msg="runSandbox: removing pod sandbox from storage: e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.114805352Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:23.114833117Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=6a8805ff-8889-4985-8516-25e57ca86366 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.115332 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.115380 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.115407 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:23.115452 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-727b6877\x2dc6ec\x2d4dd8\x2d84fb\x2dfc9337fbfdeb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-727b6877\x2dc6ec\x2d4dd8\x2d84fb\x2dfc9337fbfdeb.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-727b6877\x2dc6ec\x2d4dd8\x2d84fb\x2dfc9337fbfdeb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-727b6877\x2dc6ec\x2d4dd8\x2d84fb\x2dfc9337fbfdeb.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-727b6877\x2dc6ec\x2d4dd8\x2d84fb\x2dfc9337fbfdeb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-727b6877\x2dc6ec\x2d4dd8\x2d84fb\x2dfc9337fbfdeb.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e4c92d5689510136e00ce16f99a1ca4111677477b97e3d620534df87fa2a8faa-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e05aaecd\x2db58f\x2d4f63\x2db920\x2d51ea670d2bc0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e05aaecd\x2db58f\x2d4f63\x2db920\x2d51ea670d2bc0.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1a2fffaa21ce1690acfb06db82902582b89d2e907587d030b1619d03d9f34ebf-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9fe34b1846a4a5f19dc382caeb076792cc8b360d8190b0f0485b449079fdd5bb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f6322427a9740446dfcd91f8028c8dfc745a1e2b27f292a4046f46fa031b02c9-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9bf892187f34b7b48e0d51466270623ee7ce75658e405677bd88b3ca63357ebc-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-89aa572c5c115f89ed05041404f27915c140b77e8b6e4c1b32b41dc9ab2ff115-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.031164997Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.031436114Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-66909d9a\x2dbdd0\x2d442c\x2d900f\x2d99dff72bda0b.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-66909d9a\x2dbdd0\x2d442c\x2d900f\x2d99dff72bda0b.mount has successfully entered the 'dead' state. Jan 23 16:32:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-66909d9a\x2dbdd0\x2d442c\x2d900f\x2d99dff72bda0b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-66909d9a\x2dbdd0\x2d442c\x2d900f\x2d99dff72bda0b.mount has successfully entered the 'dead' state. Jan 23 16:32:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-66909d9a\x2dbdd0\x2d442c\x2d900f\x2d99dff72bda0b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-66909d9a\x2dbdd0\x2d442c\x2d900f\x2d99dff72bda0b.mount has successfully entered the 'dead' state. Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.074306324Z" level=info msg="runSandbox: deleting pod ID 1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7 from idIndex" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.074332888Z" level=info msg="runSandbox: removing pod sandbox 1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.074346823Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.074361342Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7-userdata-shm.mount has successfully entered the 'dead' state. 
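Every failed sandbox in this stretch dies on the same condition: Multus's wait for the default-network readiness indicator at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, the delegate CNI config that OVN-Kubernetes writes once it is up. Below is a minimal Go sketch of that kind of wait, assuming a plain os.Stat poll via k8s.io/apimachinery's wait.PollImmediate; the interval and timeout are illustrative, not Multus's actual settings. Note that wait's timeout error is literally "timed out waiting for the condition" — the string these entries end with.

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the named file exists, mirroring the
// "PollImmediate error waiting for ReadinessIndicatorFile" entries above.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			return false, nil // file not there yet; keep polling
		}
		return true, nil // delegate config exists; default network is ready
	})
}

func main() {
	// Path as named in the log; 5s is a made-up timeout for the sketch.
	err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 5*time.Second)
	fmt.Println(err) // prints "timed out waiting for the condition" if the file never appears
}

Until OVN-Kubernetes writes that file, every CNI operation on the node fails the same way, which is why the identical message repeats across unrelated pods.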
Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.094468807Z" level=info msg="runSandbox: removing pod sandbox from storage: 1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.098089135Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:24.098107941Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=14cc62be-81c8-4553-9e3f-21217eebf8e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:24.098356 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:32:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:24.098400 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:32:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:24.098423 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:32:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:24.098469 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(1c263eb5153f9a74dc500d3c89cf0da485fe93f3a30f0511f40d4a73d68315e7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:32:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:25.996144 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:25.996537704Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:25.996590990Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.008398106Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/29ad44e1-9d1d-44d6-b051-9f26963a7d1b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.008423428Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.036836679Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.036877162Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.037764366Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.037801752Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c3292b6b\x2d06ab\x2d4e84\x2d9757\x2df1afebe5f003.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c3292b6b\x2d06ab\x2d4e84\x2d9757\x2df1afebe5f003.mount has successfully entered the 'dead' state. Jan 23 16:32:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-67803115\x2d8222\x2d4c37\x2da880\x2d8157e7e489c9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-67803115\x2d8222\x2d4c37\x2da880\x2d8157e7e489c9.mount has successfully entered the 'dead' state. Jan 23 16:32:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c3292b6b\x2d06ab\x2d4e84\x2d9757\x2df1afebe5f003.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c3292b6b\x2d06ab\x2d4e84\x2d9757\x2df1afebe5f003.mount has successfully entered the 'dead' state. Jan 23 16:32:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-67803115\x2d8222\x2d4c37\x2da880\x2d8157e7e489c9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-67803115\x2d8222\x2d4c37\x2da880\x2d8157e7e489c9.mount has successfully entered the 'dead' state. 
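The kubenswrapper E-lines show a single RunPodSandbox failure surfacing four times as it climbs the kubelet's call stack: remote_runtime.go logs the raw RPC error, kuberuntime_sandbox.go reports the failed sandbox, kuberuntime_manager.go wraps it as CreatePodSandboxError, and pod_workers.go logs "Error syncing pod, skipping" and requeues the pod for the next sync. A toy Go sketch of that layering — not kubelet source; the function names here only mimic the call sites named in the log:

package main

import (
	"errors"
	"fmt"
)

// runPodSandbox stands in for the CRI RunPodSandbox RPC to CRI-O.
func runPodSandbox() error {
	return errors.New("rpc error: code = Unknown desc = failed to create pod network sandbox ...")
}

// createSandbox plays the kuberuntime_sandbox.go role: log, then pass the error up.
func createSandbox(pod string) error {
	err := runPodSandbox()
	if err != nil {
		fmt.Println("RunPodSandbox from runtime service failed:", err) // remote_runtime.go layer
		fmt.Printf("Failed to create sandbox for pod %q: %v\n", pod, err)
	}
	return err
}

// syncPod plays the kuberuntime_manager.go / pod_workers.go roles: wrap the
// error as a CreatePodSandboxError and skip this sync. The pod worker retries
// later, which is why the same text recurs for the same pod every few seconds.
func syncPod(pod string) {
	if err := createSandbox(pod); err != nil {
		wrapped := fmt.Errorf("failed to \"CreatePodSandbox\" for %q with CreatePodSandboxError: %w", pod, err)
		fmt.Println("Error syncing pod, skipping:", wrapped)
	}
}

func main() {
	syncPod("openshift-dns/dns-default-srzv5")
}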
Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.075313349Z" level=info msg="runSandbox: deleting pod ID 429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0 from idIndex" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.075343065Z" level=info msg="runSandbox: removing pod sandbox 429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.075360494Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.075375143Z" level=info msg="runSandbox: unmounting shmPath for sandbox 429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.075319606Z" level=info msg="runSandbox: deleting pod ID a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841 from idIndex" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.075432534Z" level=info msg="runSandbox: removing pod sandbox a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.075447865Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.075461182Z" level=info msg="runSandbox: unmounting shmPath for sandbox a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.091443603Z" level=info msg="runSandbox: removing pod sandbox from storage: 429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.091462247Z" level=info msg="runSandbox: removing pod sandbox from storage: a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.094167619Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.094187569Z" 
level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=cb10530f-83d9-45f6-b6b4-5e0ebf9dc659 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:26.094384 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:32:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:26.094426 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:32:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:26.094448 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:32:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:26.094500 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.097215504Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:26.097233704Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=caa9474c-7fa0-4ca5-a2c6-8da7adda45fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:26.097457 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:32:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:26.097498 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:32:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:26.097523 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:32:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:26.097570 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c3292b6b\x2d06ab\x2d4e84\x2d9757\x2df1afebe5f003.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c3292b6b\x2d06ab\x2d4e84\x2d9757\x2df1afebe5f003.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-67803115\x2d8222\x2d4c37\x2da880\x2d8157e7e489c9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-67803115\x2d8222\x2d4c37\x2da880\x2d8157e7e489c9.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-429d79bc495766ba731b8b1c27422195c0ada5995e1ebbf1a920399fae4f6bb0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a870eaf2dd90f1254072ce479478112021b16744fb0c920f966a804c4d760841-userdata-shm.mount has successfully entered the 'dead' state. 
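The run-netns/run-ipcns/run-utsns entries are systemd's transient mount units for the per-sandbox namespaces that CRI-O tears down after each failed attempt; the \x2d runs in their names are systemd's escaping of '-' in unit names. A small Go decoder for reading them (systemd-escape --unescape does the same from a shell):

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// unescapeUnit decodes systemd's \xXX escaping in unit names, e.g.
// run-netns-c3292b6b\x2d06ab\x2d....mount -> run-netns-c3292b6b-06ab-....mount
// (\x2d is simply '-').
func unescapeUnit(s string) string {
	re := regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)
	return re.ReplaceAllStringFunc(s, func(m string) string {
		b, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(b))
	})
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-c3292b6b\x2d06ab\x2d4e84\x2d9757\x2df1afebe5f003.mount`))
}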
Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.048315123Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.048360488Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.048376235Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.048428357Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.050172831Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.050210882Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.050914085Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.050945133Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-05f03049\x2df5dd\x2d441f\x2db140\x2db9466ed1d6a0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-05f03049\x2df5dd\x2d441f\x2db140\x2db9466ed1d6a0.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b089846c\x2d7346\x2d4573\x2d9c07\x2d1cd23f7d01d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b089846c\x2d7346\x2d4573\x2d9c07\x2d1cd23f7d01d1.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-547b0bcd\x2d2d82\x2d44ed\x2d9e60\x2dae066dbfc0bb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-547b0bcd\x2d2d82\x2d44ed\x2d9e60\x2dae066dbfc0bb.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b19c813a\x2d048d\x2d4784\x2dbd23\x2d283a4c6c35b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b19c813a\x2d048d\x2d4784\x2dbd23\x2d283a4c6c35b2.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-05f03049\x2df5dd\x2d441f\x2db140\x2db9466ed1d6a0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-05f03049\x2df5dd\x2d441f\x2db140\x2db9466ed1d6a0.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b089846c\x2d7346\x2d4573\x2d9c07\x2d1cd23f7d01d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b089846c\x2d7346\x2d4573\x2d9c07\x2d1cd23f7d01d1.mount has successfully entered the 'dead' state. 
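Both directions block on the same file: ADDs fail with "still waiting for readinessindicatorfile" while DELs fail with "PollImmediate error waiting for ReadinessIndicatorFile (on del)", so stale sandboxes cannot be torn down even as new ones cannot be created. A rough triage sketch that tallies which pods are stuck, fed with something like journalctl -u crio -u kubelet --no-pager; the regex assumes the "Multus: [ns/pod/uid]" shape seen in these entries:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Capture namespace/pod from "Multus: [ns/pod/uid]".
	re := regexp.MustCompile(`Multus: \[([^/\]]+/[^/\]]+)/`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // these journal lines are very long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "ReadinessIndicatorFile") &&
			!strings.Contains(line, "readinessindicatorfile") {
			continue
		}
		if m := re.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%6d  %s\n", n, pod)
	}
}

Run as: journalctl -u crio -u kubelet --no-pager | go run tally.go — a high, evenly spread count across namespaces (as here) points at the default network not coming up, rather than at any individual pod.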
Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-547b0bcd\x2d2d82\x2d44ed\x2d9e60\x2dae066dbfc0bb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-547b0bcd\x2d2d82\x2d44ed\x2d9e60\x2dae066dbfc0bb.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b19c813a\x2d048d\x2d4784\x2dbd23\x2d283a4c6c35b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b19c813a\x2d048d\x2d4784\x2dbd23\x2d283a4c6c35b2.mount has successfully entered the 'dead' state. Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092304007Z" level=info msg="runSandbox: deleting pod ID 605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b from idIndex" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092333475Z" level=info msg="runSandbox: removing pod sandbox 605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092350393Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092361541Z" level=info msg="runSandbox: deleting pod ID 361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844 from idIndex" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092400525Z" level=info msg="runSandbox: removing pod sandbox 361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092416416Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092429622Z" level=info msg="runSandbox: unmounting shmPath for sandbox 361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092363833Z" level=info msg="runSandbox: unmounting shmPath for sandbox 605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092311653Z" level=info msg="runSandbox: deleting pod ID 2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb from idIndex" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:32:27.092578385Z" level=info msg="runSandbox: removing pod sandbox 2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092592372Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.092604777Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.096286230Z" level=info msg="runSandbox: deleting pod ID bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d from idIndex" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.096310170Z" level=info msg="runSandbox: removing pod sandbox bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.096322559Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.096336659Z" level=info msg="runSandbox: unmounting shmPath for sandbox bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.104493595Z" level=info msg="runSandbox: removing pod sandbox from storage: 2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.104509877Z" level=info msg="runSandbox: removing pod sandbox from storage: 605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.104503391Z" level=info msg="runSandbox: removing pod sandbox from storage: 361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.107795809Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.107813423Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=68716334-5d83-4154-9035-616745afd567 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.108071 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.108116 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.108137 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.108188 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.108435944Z" level=info msg="runSandbox: removing pod sandbox from storage: bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.110766786Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.110785583Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=d32e52a6-4a7f-4058-8619-0af0d5789298 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.111038 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.111081 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.111105 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.111154 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.113719457Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.113738606Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=e509f823-63de-4e95-971b-e3fb48d1f353 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.113974 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.114008 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.114031 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.114072 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.116661487Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:27.116678846Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=448dea8b-26ba-4f8d-a5db-898ee17f6caa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.116851 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.116884 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.116908 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:27.116946 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:27.862648 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:27.862668 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:27.862674 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:27.862681 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:27.862687 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:27.862693 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:27.862699 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-547b0bcd\x2d2d82\x2d44ed\x2d9e60\x2dae066dbfc0bb.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-547b0bcd\x2d2d82\x2d44ed\x2d9e60\x2dae066dbfc0bb.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b19c813a\x2d048d\x2d4784\x2dbd23\x2d283a4c6c35b2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b19c813a\x2d048d\x2d4784\x2dbd23\x2d283a4c6c35b2.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-05f03049\x2df5dd\x2d441f\x2db140\x2db9466ed1d6a0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-05f03049\x2df5dd\x2d441f\x2db140\x2db9466ed1d6a0.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b089846c\x2d7346\x2d4573\x2d9c07\x2d1cd23f7d01d1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b089846c\x2d7346\x2d4573\x2d9c07\x2d1cd23f7d01d1.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-605375de613a1d801e08ed723032289c10e35833c7569fb1840c6ffe74afd86b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2de61371bfa7a4ab757b18a0d5849b90147cc5142987e9776c917229517deccb-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-bdc67a1bcf46230861dd984cd5c2224c7b08081e4ccf2d7bfb0cb478eed7351d-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-361dd9738d3555f92ef72f99b0c689a988907ff5280c06ceb711c87b4907a844-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.031166508Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.031201807Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-93888ead\x2d3f31\x2d48de\x2d9058\x2dd2ed06229407.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-93888ead\x2d3f31\x2d48de\x2d9058\x2dd2ed06229407.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-93888ead\x2d3f31\x2d48de\x2d9058\x2dd2ed06229407.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-93888ead\x2d3f31\x2d48de\x2d9058\x2dd2ed06229407.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-93888ead\x2d3f31\x2d48de\x2d9058\x2dd2ed06229407.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-93888ead\x2d3f31\x2d48de\x2d9058\x2dd2ed06229407.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.075304873Z" level=info msg="runSandbox: deleting pod ID 7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb from idIndex" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.075329668Z" level=info msg="runSandbox: removing pod sandbox 7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.075346307Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.075360356Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.087451714Z" level=info msg="runSandbox: removing pod sandbox from storage: 7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.090711727Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.090729332Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=628b051f-3ddb-41e4-8c8d-540cb48f6d4d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:28.090936 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:32:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:28.091062 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:32:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:28.091089 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:32:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:28.091133 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(7d20ea1ac81e50e27ff05ceec825385a8ab4a9bf7498f81c2aa73b3279069abb): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:28.142694661Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.029455420Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.029493763Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ad7803d6\x2dc9a1\x2d4ad5\x2d8454\x2dcb2ed68d79e9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ad7803d6\x2dc9a1\x2d4ad5\x2d8454\x2dcb2ed68d79e9.mount has successfully entered the 'dead' state. Jan 23 16:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ad7803d6\x2dc9a1\x2d4ad5\x2d8454\x2dcb2ed68d79e9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ad7803d6\x2dc9a1\x2d4ad5\x2d8454\x2dcb2ed68d79e9.mount has successfully entered the 'dead' state. Jan 23 16:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ad7803d6\x2dc9a1\x2d4ad5\x2d8454\x2dcb2ed68d79e9.mount: Succeeded. 
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ad7803d6\x2dc9a1\x2d4ad5\x2d8454\x2dcb2ed68d79e9.mount has successfully entered the 'dead' state.
Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.074305670Z" level=info msg="runSandbox: deleting pod ID 97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c from idIndex" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.074334527Z" level=info msg="runSandbox: removing pod sandbox 97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.074350010Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.074365115Z" level=info msg="runSandbox: unmounting shmPath for sandbox 97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.087432903Z" level=info msg="runSandbox: removing pod sandbox from storage: 97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.090826663Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:31.090844459Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=042d2c78-2e94-47c1-973b-86db22712fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:31.091054 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:31.091093 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:31.091114 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:31.091161 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 16:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-97188fce82580d00e73247a4e9eecda0523e9ddb1760fe4a13e4e0dc875d6f1c-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.031550676Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.031594495Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:32:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1f8db7b7\x2db10c\x2d4b31\x2d879f\x2d8236045d7b10.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-1f8db7b7\x2db10c\x2d4b31\x2d879f\x2d8236045d7b10.mount has successfully entered the 'dead' state.
Jan 23 16:32:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1f8db7b7\x2db10c\x2d4b31\x2d879f\x2d8236045d7b10.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-1f8db7b7\x2db10c\x2d4b31\x2d879f\x2d8236045d7b10.mount has successfully entered the 'dead' state.
Jan 23 16:32:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1f8db7b7\x2db10c\x2d4b31\x2d879f\x2d8236045d7b10.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-1f8db7b7\x2db10c\x2d4b31\x2d879f\x2d8236045d7b10.mount has successfully entered the 'dead' state.
Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.085303892Z" level=info msg="runSandbox: deleting pod ID 41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459 from idIndex" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.085327464Z" level=info msg="runSandbox: removing pod sandbox 41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.085341990Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.085355064Z" level=info msg="runSandbox: unmounting shmPath for sandbox 41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.097396482Z" level=info msg="runSandbox: removing pod sandbox from storage: 41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.100815157Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:33.100831854Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=324b04b4-6e22-4677-987c-b4c7c6226f55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:33.101028 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:32:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:33.101074 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:32:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:33.101098 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:32:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:33.101146 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(41e1b4a401f77cdbfab764085d5c53862fdf7c96a4931863c67168b7eb21f459): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:32:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:35.996151 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:32:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:35.996691 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:37.996998 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:37.997322 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:37.997373 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:37.997426588Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:37.997627740Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:37.997815733Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:37.997838614Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:37.997884229Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:37.997851068Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:38.019479153Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b81b60f4-d903-4246-ba0c-94aac4212034 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:32:38.019499574Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:38.020382444Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/9d478f56-abe8-4628-942a-723da0d5c902 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:38.020401661Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:38.021248988Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/da9fa52c-bbb4-46c5-bb4c-3b974a100990 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:38.021269627Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:38.996246 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:38.996559323Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:38.996607363Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:39.007441289Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/2a58ad02-3006-4fdb-87f5-b6156c0743d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:39.007462934Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:39.996440 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:32:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:39.996758539Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:39.996808484Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:40.011295892Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/6c2624a7-35de-4e90-93d7-8faed45ffdad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:40.011332721Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:40.996069 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:40.996244 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:40.996506 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:40.996541852Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:40.996583059Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:40.996615707Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:40.996652545Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:40.996761299Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:40.996804375Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:41.016867529Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/523c02ae-59f5-45bb-90cb-2501bda5232d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:41.016888740Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:41.019134005Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/34f9de6c-2994-4314-ab4b-4eb1c1bd9e0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:41.019155607Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:41.019748990Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/98ab6e4d-7bef-4e76-9c05-caad35316b74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:41 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:41.019769536Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:41.996194 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:41.996628697Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:41.996685899Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:42.008541801Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/23aeabb9-f2d6-45f8-9830-f6ed9e137f63 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:42.008564324Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:42.995868 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:32:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:42.996288255Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:42.996323581Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:43.007202082Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/83ec206b-593a-496e-b62a-b49565d6b473 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:43.007228790Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:45.995624 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:32:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:45.996169572Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:32:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:45.996234828Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:32:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:46.008916753Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/4e394ac1-3871-4189-9e59-4d0d1ca97b41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:32:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:46.008940555Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:32:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:47.997226 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:32:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:47.997716 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:32:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:32:58.147201695Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:32:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:32:59.997181 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:32:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:32:59.997689 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.103731601Z" level=info msg="NetworkStart: stopping network for sandbox e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.103915111Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/c8ca34c9-e51e-45d7-b982-6c38fadfe413 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.103938936Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.103945784Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.103951936Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.104907072Z" level=info msg="NetworkStart: stopping network for sandbox 3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.104994220Z" level=info msg="NetworkStart: stopping network for sandbox 7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.105063486Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c2706b0d-069e-4e3f-9e4e-13ec4d20eafa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.105091842Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.105099709Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.105106878Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.105121419Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/b6588b3c-0d77-4cdd-9d3d-81a2fea6bb99 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.105143536Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.105151903Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.105158564Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" 
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.106606857Z" level=info msg="NetworkStart: stopping network for sandbox b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.106714924Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/ae6ba663-6ba7-4afd-9cc0-bf7cb3e177b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.106737915Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.106745062Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.106751830Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.107818475Z" level=info msg="NetworkStart: stopping network for sandbox 4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.107924492Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/152546f6-0a7f-4238-b167-e05a00e7c471 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.107943132Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.107949466Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:08.107955082Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1184] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1189] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1190] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1402] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1403] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1415] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1418] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1418] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1420] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1423] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491588.1427] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:33:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491589.7792] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:33:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:11.021196247Z" level=info msg="NetworkStart: stopping network for sandbox 78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:11.021347894Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/29ad44e1-9d1d-44d6-b051-9f26963a7d1b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:11.021371168Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:11.021380802Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:11.021387588Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:11.996996 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:33:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:11.997765 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.032960290Z" level=info msg="NetworkStart: stopping network for sandbox 363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033008253Z" level=info msg="NetworkStart: stopping network for sandbox 6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033346589Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b81b60f4-d903-4246-ba0c-94aac4212034 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033371645Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033379591Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033387410Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033463791Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/9d478f56-abe8-4628-942a-723da0d5c902 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033493026Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033501949Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.033509141Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.034721336Z" level=info msg="NetworkStart: stopping network for sandbox 5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.034837496Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/da9fa52c-bbb4-46c5-bb4c-3b974a100990 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.034858511Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.034865677Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:23.034872979Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:24.020733334Z" level=info msg="NetworkStart: stopping network for sandbox de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:24.020886035Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/2a58ad02-3006-4fdb-87f5-b6156c0743d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:24.020911914Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:24.020918980Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:24.020925021Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:25.024845636Z" level=info msg="NetworkStart: stopping network for sandbox d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:25.024991683Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/6c2624a7-35de-4e90-93d7-8faed45ffdad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:25.025014745Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:25.025021885Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:25.025028234Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:25.996443 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:25.996998 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.029017196Z" level=info msg="NetworkStart: stopping network for sandbox 9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.029185249Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/523c02ae-59f5-45bb-90cb-2501bda5232d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.029219969Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.029227250Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.029233364Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.031829636Z" level=info msg="NetworkStart: stopping network for sandbox 2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.031942787Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/34f9de6c-2994-4314-ab4b-4eb1c1bd9e0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.031965699Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.031972560Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.031978535Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.033923004Z" level=info msg="NetworkStart: stopping network for sandbox 601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.034058210Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/98ab6e4d-7bef-4e76-9c05-caad35316b74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.034080056Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.034088045Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:26.034094938Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:27.021388871Z" level=info msg="NetworkStart: stopping network for sandbox 234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:27.021515262Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/23aeabb9-f2d6-45f8-9830-f6ed9e137f63 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:27.021538701Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:27.021547162Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:27.021553101Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:27.863329 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:27.863348 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:27.863356 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:27.863361 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:27.863368 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:27.863374 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:27.863381 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:33:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:28.019721913Z" level=info msg="NetworkStart: stopping network for sandbox 47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:28.019925784Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/83ec206b-593a-496e-b62a-b49565d6b473 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:28.019952101Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:28.019958717Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:28.019965304Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:28.143822909Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:33:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:31.023588048Z" level=info msg="NetworkStart: stopping network for sandbox 6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:31.023748578Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/4e394ac1-3871-4189-9e59-4d0d1ca97b41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:33:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:31.023770428Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:33:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:31.023776408Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:33:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:31.023782908Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:33:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:38.996916 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:33:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:38.997552 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.115956270Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.116216794Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.116568226Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.116600034Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.116584227Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.116683865Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.117781010Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.117809480Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.118638748Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.118681204Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-152546f6\x2d0a7f\x2d4238\x2db167\x2de05a00e7c471.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-152546f6\x2d0a7f\x2d4238\x2db167\x2de05a00e7c471.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ae6ba663\x2d6ba7\x2d4afd\x2d9cc0\x2dbf7cb3e177b6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ae6ba663\x2d6ba7\x2d4afd\x2d9cc0\x2dbf7cb3e177b6.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c2706b0d\x2d069e\x2d4e3f\x2d9e4e\x2d13ec4d20eafa.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c2706b0d\x2d069e\x2d4e3f\x2d9e4e\x2d13ec4d20eafa.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c8ca34c9\x2de51e\x2d45d7\x2db982\x2d6c38fadfe413.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c8ca34c9\x2de51e\x2d45d7\x2db982\x2d6c38fadfe413.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b6588b3c\x2d0d77\x2d4cdd\x2d9d3d\x2d81a2fea6bb99.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b6588b3c\x2d0d77\x2d4cdd\x2d9d3d\x2d81a2fea6bb99.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-152546f6\x2d0a7f\x2d4238\x2db167\x2de05a00e7c471.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-152546f6\x2d0a7f\x2d4238\x2db167\x2de05a00e7c471.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ae6ba663\x2d6ba7\x2d4afd\x2d9cc0\x2dbf7cb3e177b6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ae6ba663\x2d6ba7\x2d4afd\x2d9cc0\x2dbf7cb3e177b6.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b6588b3c\x2d0d77\x2d4cdd\x2d9d3d\x2d81a2fea6bb99.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b6588b3c\x2d0d77\x2d4cdd\x2d9d3d\x2d81a2fea6bb99.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c2706b0d\x2d069e\x2d4e3f\x2d9e4e\x2d13ec4d20eafa.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c2706b0d\x2d069e\x2d4e3f\x2d9e4e\x2d13ec4d20eafa.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c8ca34c9\x2de51e\x2d45d7\x2db982\x2d6c38fadfe413.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c8ca34c9\x2de51e\x2d45d7\x2db982\x2d6c38fadfe413.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-152546f6\x2d0a7f\x2d4238\x2db167\x2de05a00e7c471.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-152546f6\x2d0a7f\x2d4238\x2db167\x2de05a00e7c471.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ae6ba663\x2d6ba7\x2d4afd\x2d9cc0\x2dbf7cb3e177b6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ae6ba663\x2d6ba7\x2d4afd\x2d9cc0\x2dbf7cb3e177b6.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c8ca34c9\x2de51e\x2d45d7\x2db982\x2d6c38fadfe413.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c8ca34c9\x2de51e\x2d45d7\x2db982\x2d6c38fadfe413.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b6588b3c\x2d0d77\x2d4cdd\x2d9d3d\x2d81a2fea6bb99.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b6588b3c\x2d0d77\x2d4cdd\x2d9d3d\x2d81a2fea6bb99.mount has successfully entered the 'dead' state.
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162345677Z" level=info msg="runSandbox: deleting pod ID e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8 from idIndex" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162377338Z" level=info msg="runSandbox: removing pod sandbox e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162391557Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162415532Z" level=info msg="runSandbox: unmounting shmPath for sandbox e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162346815Z" level=info msg="runSandbox: deleting pod ID 7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9 from idIndex" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162471718Z" level=info msg="runSandbox: removing pod sandbox 7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162482832Z" level=info msg="runSandbox: deleting pod ID 4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c from idIndex" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162516878Z" level=info msg="runSandbox: removing pod sandbox 4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162533192Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162489493Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162579196Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.162556981Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.163272606Z" level=info msg="runSandbox: deleting pod ID b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42 from idIndex" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.163296041Z" level=info msg="runSandbox: removing pod sandbox b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.163308314Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.163324049Z" level=info msg="runSandbox: unmounting shmPath for sandbox b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.166272514Z" level=info msg="runSandbox: deleting pod ID 3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006 from idIndex" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.166295578Z" level=info msg="runSandbox: removing pod sandbox 3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.166306824Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.166318176Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.170472937Z" level=info msg="runSandbox: removing pod sandbox from storage: 7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.173295097Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.173316997Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=8273431a-49a2-4d3b-8db7-baed45927ad9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.173527 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.173573 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.173596 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.173652 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.174479844Z" level=info msg="runSandbox: removing pod sandbox from storage: e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.174510333Z" level=info msg="runSandbox: removing pod sandbox from storage: 4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.177525155Z" level=info msg="runSandbox: removing pod sandbox from storage: b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.181324305Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.181345448Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=36edda8f-ecb6-433a-adf5-6590e9074095 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.181613 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.181657 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.181681 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.181730 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.182576131Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.184341622Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.184358432Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=91ecef72-8f20-45ca-9950-bcd86597d2d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.184594 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.184627 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.184648 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.184687 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.187571210Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.187601357Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=81e2846a-9e7d-4f69-856d-16c9962339c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.187831 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.187863 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.187885 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.187923 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.190512105Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.190530211Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=36e59ba6-f0f7-4c15-bc4e-dae646894a57 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.190734 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.190765 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.190785 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.190819 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:53.225574 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:53.225633 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:53.225704 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:53.225794 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:53.225895 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.225920920Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.225952019Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.226028419Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.226056450Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.226157044Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.226195638Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.226223829Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.226255165Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.226166132Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.226345741Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.254272650Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd 
NetNS:/var/run/netns/2da71eae-3c8b-4424-a95b-0a3e772aa3f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.254293105Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.255032700Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/aa50bc57-0ca2-452e-93de-c1216a41ba35 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.255052913Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.256848449Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/7efe3870-9c76-4091-8251-9f143b3e7ce0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.256866944Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.258877979Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ed076e5d-f272-45be-97b0-1fd468ee6746 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.258897202Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.259551672Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/52848842-b00e-49c2-8e9b-68b6fec782fd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:33:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:53.259570284Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:33:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:33:53.996396 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:33:53 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:53.996905 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:33:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c2706b0d\x2d069e\x2d4e3f\x2d9e4e\x2d13ec4d20eafa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c2706b0d\x2d069e\x2d4e3f\x2d9e4e\x2d13ec4d20eafa.mount has successfully entered the 'dead' state. Jan 23 16:33:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4e0df2d4466eb2dd5cfd3b6b9e91139825d330e8c170bea10022a5e5322edc7c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:33:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e8ae6ba157315109434594c77de130d0b2377aa500c24d5bea11e4b5602008a8-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:33:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3ab013ec06400b56087df2e84d63df2d45f2a5af6435e02db6c87a83d05c8006-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:33:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b118e8099d364689722390d7546368f0e4df74298afc8c9659c9acafd4a17d42-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:33:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7011164b53b5b04a8cabe68b2e05af48b40905950b1b7bd795cbe930d2a807c9-userdata-shm.mount has successfully entered the 'dead' state. 
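
The pair of kubenswrapper entries at 16:33:53.996 above ("RemoveContainer" followed by "Error syncing pod, skipping ... CrashLoopBackOff") is the root cause visible in this window: the ovnkube-node container keeps failing, and the kubelet is holding it in CrashLoopBackOff, so OVN-Kubernetes never comes up on this node. The Go sketch below only illustrates the restart throttling behind the recurring "back-off 5m0s" text; the 10s initial delay, per-crash doubling, and 5m cap are the commonly documented kubelet defaults, assumed here for illustration rather than taken from kubelet source.

    // backoff_sketch.go: illustrative model of CrashLoopBackOff delay growth.
    // Assumed parameters (10s initial, doubling, 5m cap); not kubelet source.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second   // assumed initial restart delay
        maxDelay := 5 * time.Minute // assumed cap; matches "back-off 5m0s"
        for crash := 1; crash <= 8; crash++ {
            fmt.Printf("after crash %d: kubelet waits %s before the next restart\n", crash, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // once capped, every sync logs the same 5m0s back-off
            }
        }
    }

Once the delay reaches the cap, every pod sync repeats the same back-off message (as seen again at 16:34:06 below) until the container either starts successfully or the back-off is reset.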
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.031402654Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.031438301Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-29ad44e1\x2d9d1d\x2d44d6\x2db051\x2d9f26963a7d1b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-29ad44e1\x2d9d1d\x2d44d6\x2db051\x2d9f26963a7d1b.mount has successfully entered the 'dead' state.
Jan 23 16:33:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-29ad44e1\x2d9d1d\x2d44d6\x2db051\x2d9f26963a7d1b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-29ad44e1\x2d9d1d\x2d44d6\x2db051\x2d9f26963a7d1b.mount has successfully entered the 'dead' state.
Jan 23 16:33:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-29ad44e1\x2d9d1d\x2d44d6\x2db051\x2d9f26963a7d1b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-29ad44e1\x2d9d1d\x2d44d6\x2db051\x2d9f26963a7d1b.mount has successfully entered the 'dead' state.
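
Both failure modes in this log name the same mechanism: on every sandbox add or delete, Multus polls for the default network's readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which OVN-Kubernetes writes once its node components are running, and "timed out waiting for the condition" is the standard timeout error of the k8s.io/apimachinery PollImmediate helper named in the entry above. A minimal Go sketch of that style of wait follows; the one-second interval and 30-second timeout are illustrative values, not Multus's configured defaults, and this is not Multus's actual source.

    // readiness_wait_sketch.go: minimal sketch of a ReadinessIndicatorFile
    // wait built on the standard k8s.io/apimachinery wait helper.
    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForIndicator polls until path exists or the timeout elapses.
    func waitForIndicator(path string, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err == nil {
                return true, nil // file present: default network is ready
            }
            return false, nil // not there yet; keep polling until timeout
        })
    }

    func main() {
        const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
        if err := waitForIndicator(indicator, 30*time.Second); err != nil {
            // On timeout, wait.PollImmediate returns an error whose text is
            // the "timed out waiting for the condition" string in this log.
            fmt.Fprintf(os.Stderr, "readiness indicator not written: %v\n", err)
            os.Exit(1)
        }
        fmt.Println("default network ready")
    }

Because the indicator file is written by the crashing ovnkube-node daemon, the poll can never succeed here, which is why every sandbox operation in this window fails with the same message.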
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.079314235Z" level=info msg="runSandbox: deleting pod ID 78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a from idIndex" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.079339508Z" level=info msg="runSandbox: removing pod sandbox 78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.079352888Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.079365930Z" level=info msg="runSandbox: unmounting shmPath for sandbox 78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.096462796Z" level=info msg="runSandbox: removing pod sandbox from storage: 78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.099224783Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:56.099243588Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=d2675e3b-aa91-4a5e-b6a3-d674fe53bf2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:33:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:56.099475 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:33:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:56.099640 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:33:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:56.099663 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:33:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:33:56.099712 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(78825179235ec250004617d709e92324fd727c5bf0c95f7d26539491bfe9151a): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 16:33:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:33:58.146288179Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:34:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:06.996538 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:34:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:06.997055 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:34:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:07.997041 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:07.997496249Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:07.997707418Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.010545171Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/084cd571-0f3c-4deb-8a0d-6373ea78f6ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.010567893Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.043953204Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.043991420Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.044569445Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.044601899Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.044712016Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.044744321Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-da9fa52c\x2dbbb4\x2d46c5\x2dbb4c\x2d3b974a100990.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-da9fa52c\x2dbbb4\x2d46c5\x2dbb4c\x2d3b974a100990.mount has successfully entered the 'dead' state.
Jan 23 16:34:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9d478f56\x2dabe8\x2d4628\x2d942a\x2d723da0d5c902.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9d478f56\x2dabe8\x2d4628\x2d942a\x2d723da0d5c902.mount has successfully entered the 'dead' state.
Jan 23 16:34:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b81b60f4\x2dd903\x2d4246\x2dba0c\x2d94aac4212034.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b81b60f4\x2dd903\x2d4246\x2dba0c\x2d94aac4212034.mount has successfully entered the 'dead' state.
Jan 23 16:34:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b81b60f4\x2dd903\x2d4246\x2dba0c\x2d94aac4212034.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b81b60f4\x2dd903\x2d4246\x2dba0c\x2d94aac4212034.mount has successfully entered the 'dead' state.
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.090309389Z" level=info msg="runSandbox: deleting pod ID 363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac from idIndex" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.090333841Z" level=info msg="runSandbox: removing pod sandbox 363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.090347026Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.090357709Z" level=info msg="runSandbox: unmounting shmPath for sandbox 363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.091318002Z" level=info msg="runSandbox: deleting pod ID 5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f from idIndex" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.091342537Z" level=info msg="runSandbox: removing pod sandbox 5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.091357701Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.091371695Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.099303764Z" level=info msg="runSandbox: deleting pod ID 6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8 from idIndex" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.099326829Z" level=info msg="runSandbox: removing pod sandbox 6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.099340133Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.099352666Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.106446539Z" level=info msg="runSandbox: removing pod sandbox from storage: 5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.108444096Z" level=info msg="runSandbox: removing pod sandbox from storage: 363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.109242879Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.109262335Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=e4e09e5c-646d-4426-a4c0-919c84652102 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.109464 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.109503 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.109527 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.109574 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.111505219Z" level=info msg="runSandbox: removing pod sandbox from storage: 6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.112705909Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.112725497Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=8da701f0-c8de-434b-9031-cce9ffce4207 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.112961 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.112993 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.113015 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.113052 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.115754633Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:08.115772444Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=88cd4d4c-1266-414a-ba7b-9ee7967b73fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.115872 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.115905 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.115924 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:34:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:08.115967 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-da9fa52c\x2dbbb4\x2d46c5\x2dbb4c\x2d3b974a100990.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-da9fa52c\x2dbbb4\x2d46c5\x2dbb4c\x2d3b974a100990.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-da9fa52c\x2dbbb4\x2d46c5\x2dbb4c\x2d3b974a100990.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-da9fa52c\x2dbbb4\x2d46c5\x2dbb4c\x2d3b974a100990.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9d478f56\x2dabe8\x2d4628\x2d942a\x2d723da0d5c902.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9d478f56\x2dabe8\x2d4628\x2d942a\x2d723da0d5c902.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9d478f56\x2dabe8\x2d4628\x2d942a\x2d723da0d5c902.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9d478f56\x2dabe8\x2d4628\x2d942a\x2d723da0d5c902.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b81b60f4\x2dd903\x2d4246\x2dba0c\x2d94aac4212034.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b81b60f4\x2dd903\x2d4246\x2dba0c\x2d94aac4212034.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6239ec855a85d7454788c0127a22574d69d1014f36c91e3024f92900324c21c8-userdata-shm.mount has successfully entered the 'dead' state. 
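Every add failure in this stretch has the same shape: Multus refuses to attach the pod because the default network's CNI config has not yet appeared at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, and its poll for that file times out ("pollimmediate error: timed out waiting for the condition"). A minimal sketch of such a readiness gate, assuming wait.PollImmediate from k8s.io/apimachinery; the 250 ms interval and 1-minute timeout are illustrative choices, not Multus's actual settings.

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator blocks until path exists or timeout expires,
    // mirroring the gate the Multus errors above describe.
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        return wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err != nil {
                return false, nil // not there yet; keep polling
            }
            return true, nil
        })
    }

    func main() {
        err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", time.Minute)
        if err != nil {
            fmt.Println("pollimmediate error:", err) // matches the wording in the log
        }
    }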
Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5042b03ff6b868acdf6fca719986288455c2da6ae458e30a3b2e00e51219f45f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-363424624f9425fc0971b41703286a1965b07ffb256564da8bce2dcaa2a4abac-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.032314841Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.032364895Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2a58ad02\x2d3006\x2d4fdb\x2d87f5\x2db6156c0743d0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2a58ad02\x2d3006\x2d4fdb\x2d87f5\x2db6156c0743d0.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2a58ad02\x2d3006\x2d4fdb\x2d87f5\x2db6156c0743d0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2a58ad02\x2d3006\x2d4fdb\x2d87f5\x2db6156c0743d0.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2a58ad02\x2d3006\x2d4fdb\x2d87f5\x2db6156c0743d0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2a58ad02\x2d3006\x2d4fdb\x2d87f5\x2db6156c0743d0.mount has successfully entered the 'dead' state. 
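The "(on del)" variant of the same message ("PollImmediate error waiting for ReadinessIndicatorFile (on del)") shows the readiness gate also blocks CNI DEL, so CRI-O cannot even unwire a sandbox that never started; it logs "Error stopping network on cleanup" and proceeds with local teardown anyway. The runSandbox lines trace that teardown order, sketched schematically below (illustrative Go, not CRI-O's actual code; the helper names are invented for the sketch).

    package main

    import "log"

    // Stubs standing in for CRI-O internals; each corresponds to one
    // "runSandbox: ..." log line above.
    func cniDel(id string) error               { return nil }
    func cleanupNamespaces(id string)          {} // run-{uts,ipc,net}ns-*.mount units go 'dead'
    func deletePodIDFromIndex(id string)       {}
    func removeSandbox(id string)              {}
    func deleteContainerIDFromIndex(id string) {}
    func unmountShmPath(id string)             {} // the *-userdata-shm.mount units
    func removeFromStorage(id string)          {}
    func releaseNames(id string)               {}

    func cleanupFailedSandbox(id string) {
        if err := cniDel(id); err != nil {
            // A failed DEL is logged but does not abort the rest of the teardown.
            log.Printf("Error stopping network on cleanup: %v", err)
        }
        cleanupNamespaces(id)
        deletePodIDFromIndex(id)
        removeSandbox(id)
        deleteContainerIDFromIndex(id)
        unmountShmPath(id)
        removeFromStorage(id)
        releaseNames(id)
    }

    func main() { cleanupFailedSandbox("example-sandbox-id") }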
Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.090317614Z" level=info msg="runSandbox: deleting pod ID de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba from idIndex" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.090346897Z" level=info msg="runSandbox: removing pod sandbox de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.090365538Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.090379288Z" level=info msg="runSandbox: unmounting shmPath for sandbox de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.110418035Z" level=info msg="runSandbox: removing pod sandbox from storage: de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.114007896Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:09.114026960Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=80df7362-07b6-401c-9ea1-322a69178918 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:09.114305 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:34:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:09.114353 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:34:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:09.114375 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:34:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:09.114422 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(de7b1058d1b0ff8245c8e6974da84133379a0765f0f270f9dfd3495f55562bba): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.036930004Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.036967408Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6c2624a7\x2d35de\x2d4e90\x2d93d7\x2d8faed45ffdad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6c2624a7\x2d35de\x2d4e90\x2d93d7\x2d8faed45ffdad.mount has successfully entered the 'dead' state. Jan 23 16:34:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6c2624a7\x2d35de\x2d4e90\x2d93d7\x2d8faed45ffdad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6c2624a7\x2d35de\x2d4e90\x2d93d7\x2d8faed45ffdad.mount has successfully entered the 'dead' state. Jan 23 16:34:10 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6c2624a7\x2d35de\x2d4e90\x2d93d7\x2d8faed45ffdad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6c2624a7\x2d35de\x2d4e90\x2d93d7\x2d8faed45ffdad.mount has successfully entered the 'dead' state. 
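On the kubelet side the same failure is logged four times per attempt as it climbs the stack: remote_runtime.go (raw gRPC error), kuberuntime_sandbox.go ("Failed to create sandbox for pod"), kuberuntime_manager.go ("CreatePodSandbox for pod failed"), and finally pod_workers.go ("Error syncing pod, skipping"), after which the pod worker requeues the pod and the cycle repeats. With entries this long, a quick way to confirm it is the same pods cycling rather than an ever-growing set is to count "Error syncing pod" entries per podUID. A stdlib-only Go sketch; the journal excerpt is read from stdin, and the scanner buffer is enlarged because these entries far exceed bufio's 64 KiB default token size.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var podUID = regexp.MustCompile(`podUID=([0-9a-f-]+)`)

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are huge
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "Error syncing pod") {
                continue
            }
            if m := podUID.FindStringSubmatch(line); m != nil {
                counts[m[1]]++
            }
        }
        for uid, n := range counts {
            fmt.Printf("%s %d\n", uid, n)
        }
    }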
Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.092398163Z" level=info msg="runSandbox: deleting pod ID d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60 from idIndex" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.092424532Z" level=info msg="runSandbox: removing pod sandbox d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.092438499Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.092451626Z" level=info msg="runSandbox: unmounting shmPath for sandbox d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.104438216Z" level=info msg="runSandbox: removing pod sandbox from storage: d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.107686555Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:10.107704773Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=c7bb4961-5ed2-4d34-9ae3-29027aa04b40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:10.107900 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:34:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:10.107942 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:34:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:10.107967 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:34:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:10.108014 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d1969cef107460034eb6ebf0f4703dcb6bd2ae5022720fbe3bba4adcb39d4e60): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.039475367Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.039515450Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.042781554Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.042813919Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-523c02ae\x2d59f5\x2d45bb\x2d90cb\x2d2501bda5232d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-523c02ae\x2d59f5\x2d45bb\x2d90cb\x2d2501bda5232d.mount has successfully entered the 'dead' state. 
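The affected pods now span openshift-kube-controller-manager, openshift-network-diagnostics, openshift-dns, openshift-kube-apiserver, openshift-etcd, and openshift-multus: essentially every pod on this master that needs a CNI-managed interface. The messages themselves are symptoms; on OpenShift, 10-ovn-kubernetes.conf is normally written once the OVN-Kubernetes node components come up, so the thing to investigate is why that has not happened. A first node-side check is simply to list which CNI configs Multus can currently see (path taken from the log; run on the affected node).

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const dir = "/var/run/multus/cni/net.d" // readiness-indicator directory from the log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read", dir+":", err)
            return
        }
        if len(entries) == 0 {
            fmt.Println(dir, "is empty; the default network has not published its config yet")
        }
        for _, e := range entries {
            fmt.Println(e.Name()) // 10-ovn-kubernetes.conf should show up here once OVN is ready
        }
    }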
Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.044730080Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.044763284Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-34f9de6c\x2d2994\x2d4314\x2dab4b\x2d4eb1c1bd9e0f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-34f9de6c\x2d2994\x2d4314\x2dab4b\x2d4eb1c1bd9e0f.mount has successfully entered the 'dead' state. Jan 23 16:34:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-98ab6e4d\x2d7bef\x2d4e76\x2d9c05\x2dcaad35316b74.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-98ab6e4d\x2d7bef\x2d4e76\x2d9c05\x2dcaad35316b74.mount has successfully entered the 'dead' state. Jan 23 16:34:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-523c02ae\x2d59f5\x2d45bb\x2d90cb\x2d2501bda5232d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-523c02ae\x2d59f5\x2d45bb\x2d90cb\x2d2501bda5232d.mount has successfully entered the 'dead' state. Jan 23 16:34:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-98ab6e4d\x2d7bef\x2d4e76\x2d9c05\x2dcaad35316b74.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-98ab6e4d\x2d7bef\x2d4e76\x2d9c05\x2dcaad35316b74.mount has successfully entered the 'dead' state. Jan 23 16:34:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-34f9de6c\x2d2994\x2d4314\x2dab4b\x2d4eb1c1bd9e0f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-34f9de6c\x2d2994\x2d4314\x2dab4b\x2d4eb1c1bd9e0f.mount has successfully entered the 'dead' state. Jan 23 16:34:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-523c02ae\x2d59f5\x2d45bb\x2d90cb\x2d2501bda5232d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-523c02ae\x2d59f5\x2d45bb\x2d90cb\x2d2501bda5232d.mount has successfully entered the 'dead' state. 
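The \x2d runs in unit names such as run-netns-da9fa52c\x2dbbb4\x2d46c5\x2dbb4c\x2d3b974a100990.mount are not corruption: systemd escapes "-" inside external names as \x2d when deriving unit names (see systemd.unit(5); "systemd-escape --unescape" reverses it). A simplified decoder for reading them back as plain netns UUIDs:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescape reverses systemd's \xNN escaping as seen in the mount unit
    // names above (a simplified stand-in for `systemd-escape --unescape`).
    func unescape(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); {
            if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
                if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(n))
                    i += 4
                    continue
                }
            }
            b.WriteByte(s[i])
            i++
        }
        return b.String()
    }

    func main() {
        fmt.Println(unescape(`run-netns-da9fa52c\x2dbbb4\x2d46c5\x2dbb4c\x2d3b974a100990.mount`))
        // Output: run-netns-da9fa52c-bbb4-46c5-bb4c-3b974a100990.mount
    }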
Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.083280135Z" level=info msg="runSandbox: deleting pod ID 9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea from idIndex" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.083310933Z" level=info msg="runSandbox: removing pod sandbox 9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.083328109Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.083342606Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.087327065Z" level=info msg="runSandbox: deleting pod ID 601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a from idIndex" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.087359467Z" level=info msg="runSandbox: removing pod sandbox 601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.087374129Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.087386769Z" level=info msg="runSandbox: unmounting shmPath for sandbox 601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.087329810Z" level=info msg="runSandbox: deleting pod ID 2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761 from idIndex" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.087453003Z" level=info msg="runSandbox: removing pod sandbox 2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.087469271Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.087484635Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.095439487Z" level=info msg="runSandbox: removing pod sandbox from storage: 9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.098803649Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.098821979Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=0b0e3153-c9ce-41e8-81a0-5d85e8746b5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.099082 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.099131 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.099154 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.099202 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.100446729Z" level=info msg="runSandbox: removing pod sandbox from storage: 601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.100492070Z" level=info msg="runSandbox: removing pod sandbox from storage: 2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.103630003Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.103651991Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=f0c87ff5-20b1-4fdf-a698-596667fd9c49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.103866 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.103906 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.103930 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.103977 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.106865734Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:11.106887159Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=0bad5573-a0fc-4930-b26b-da1424ea32c9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.107083 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.107117 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.107139 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:34:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:11.107179 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.031876991Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.031913121Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-23aeabb9\x2df2d6\x2d45f8\x2d9830\x2df6ed9e137f63.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-23aeabb9\x2df2d6\x2d45f8\x2d9830\x2df6ed9e137f63.mount has successfully entered the 'dead' state. Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-98ab6e4d\x2d7bef\x2d4e76\x2d9c05\x2dcaad35316b74.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-98ab6e4d\x2d7bef\x2d4e76\x2d9c05\x2dcaad35316b74.mount has successfully entered the 'dead' state. Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-34f9de6c\x2d2994\x2d4314\x2dab4b\x2d4eb1c1bd9e0f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-34f9de6c\x2d2994\x2d4314\x2dab4b\x2d4eb1c1bd9e0f.mount has successfully entered the 'dead' state. Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2daa5fcd69d1e43c12f65be3875aff7fb2a8d1c6c8a183c95780da12d90d1761-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-601da1fbba61f9cf0b1b20240d3f374ce32e7c90662682f39d89b72a4593d31a-userdata-shm.mount has successfully entered the 'dead' state. 
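The escalating backslashes in the pod_workers.go entries (\" at one level, \\\" at the next) are not garbling either: each layer that re-logs the error quotes the message it received, so a string quoted once at the kuberuntime layer is quoted again inside the "Error syncing pod" entry. A two-line illustration of how quoting compounds; the mechanism shown is generic Go %q quoting, and kubelet's structured logger behaves analogously.

    package main

    import "fmt"

    func main() {
        msg := `plugin type="multus" name="multus-cni-network" failed (add)`
        once := fmt.Sprintf("%q", msg)   // \"   as in the kuberuntime entries
        twice := fmt.Sprintf("%q", once) // \\\" as in the pod_workers entries
        fmt.Println(once)
        fmt.Println(twice)
    }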
Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9473bff93e2d19d48bc7850b1b8a51effc8152591d598321be2b036cbffa06ea-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-23aeabb9\x2df2d6\x2d45f8\x2d9830\x2df6ed9e137f63.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-23aeabb9\x2df2d6\x2d45f8\x2d9830\x2df6ed9e137f63.mount has successfully entered the 'dead' state. Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-23aeabb9\x2df2d6\x2d45f8\x2d9830\x2df6ed9e137f63.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-23aeabb9\x2df2d6\x2d45f8\x2d9830\x2df6ed9e137f63.mount has successfully entered the 'dead' state. Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.080310538Z" level=info msg="runSandbox: deleting pod ID 234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e from idIndex" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.080333562Z" level=info msg="runSandbox: removing pod sandbox 234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.080347250Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.080359488Z" level=info msg="runSandbox: unmounting shmPath for sandbox 234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e-userdata-shm.mount has successfully entered the 'dead' state. 
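
A side note on the mount-unit names above: systemd escapes "-" inside a path component as \x2d when deriving unit names, so run-netns-23aeabb9\x2df2d6\x2d45f8\x2d9830\x2df6ed9e137f63.mount is simply the netns mount for namespace 23aeabb9-f2d6-45f8-9830-f6ed9e137f63. A tiny helper to undo that; this sketch only handles the \x2d escape that occurs in this log, whereas full systemd unescaping decodes every \xNN sequence.

package main

import (
	"fmt"
	"strings"
)

// unescapeUnit reverses systemd's \x2d escaping so a logged mount unit can
// be matched back to its namespace or container ID.
func unescapeUnit(unit string) string {
	return strings.ReplaceAll(unit, `\x2d`, "-")
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-23aeabb9\x2df2d6\x2d45f8\x2d9830\x2df6ed9e137f63.mount`))
	// Output: run-netns-23aeabb9-f2d6-45f8-9830-f6ed9e137f63.mount
}
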
Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.096429206Z" level=info msg="runSandbox: removing pod sandbox from storage: 234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.099753414Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:12.099773565Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=5c395e0a-e5b0-4112-a14e-146cdc827ddc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:12.099976 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:12.100017 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:12.100042 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:12.100088 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.031076068Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.031109040Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-83ec206b\x2d593a\x2d496e\x2db62a\x2db49565d6b473.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-83ec206b\x2d593a\x2d496e\x2db62a\x2db49565d6b473.mount has successfully entered the 'dead' state. Jan 23 16:34:13 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-83ec206b\x2d593a\x2d496e\x2db62a\x2db49565d6b473.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-83ec206b\x2d593a\x2d496e\x2db62a\x2db49565d6b473.mount has successfully entered the 'dead' state. Jan 23 16:34:13 hub-master-0.workload.bos2.lab systemd[1]: run-netns-83ec206b\x2d593a\x2d496e\x2db62a\x2db49565d6b473.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-83ec206b\x2d593a\x2d496e\x2db62a\x2db49565d6b473.mount has successfully entered the 'dead' state. 
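
Every failed sandbox in this log is dismantled through the same sequence of runSandbox messages: delete the pod ID from the idIndex, remove the sandbox, delete its container ID, unmount the shmPath (the matching ...userdata-shm.mount unit entering the 'dead' state is that unmount), remove it from storage, then release the container and pod sandbox names. The sketch below reconstructs that pipeline purely from the logged order; the step names mirror the messages, but the functions are placeholders, not CRI-O's actual implementation.

package main

import "log"

// cleanupStep pairs a logged runSandbox message with the action behind it.
type cleanupStep struct {
	msg    string
	action func(sandboxID string) error
}

// runSandboxCleanup executes the steps strictly in the logged order so a
// failed sandbox never lingers half-registered.
func runSandboxCleanup(sandboxID string, steps []cleanupStep) {
	for _, s := range steps {
		log.Printf("runSandbox: %s %s", s.msg, sandboxID)
		if err := s.action(sandboxID); err != nil {
			log.Printf("runSandbox: %s failed for %s: %v", s.msg, sandboxID, err)
		}
	}
}

func main() {
	noop := func(string) error { return nil } // stand-ins for the real operations
	runSandboxCleanup("234879509a72b162dee5f8fd31c4aeaf9fa81d877e611378aadffb0fe9c8263e",
		[]cleanupStep{
			{"deleting pod ID from idIndex:", noop},
			{"removing pod sandbox", noop},
			{"deleting container ID from idIndex for sandbox", noop},
			{"unmounting shmPath for sandbox", noop},
			{"removing pod sandbox from storage:", noop},
			{"releasing container name for sandbox", noop},
			{"releasing pod sandbox name for sandbox", noop},
		})
}
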
Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.072307713Z" level=info msg="runSandbox: deleting pod ID 47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5 from idIndex" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.072334942Z" level=info msg="runSandbox: removing pod sandbox 47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.072348486Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.072361057Z" level=info msg="runSandbox: unmounting shmPath for sandbox 47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.084443188Z" level=info msg="runSandbox: removing pod sandbox from storage: 47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.092108116Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:13.092135826Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=aba03f29-b66a-4098-a12a-1ddcf6bada8e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:13.092432 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:34:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:13.092484 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:34:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:13.092507 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:34:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:13.092553 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(47b28abd96824db0ab0c654412feaef2809f2921374b76a1386f5899d493acb5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.034114780Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.034154633Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4e394ac1\x2d3871\x2d4189\x2d9e59\x2d4d0d1ca97b41.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4e394ac1\x2d3871\x2d4189\x2d9e59\x2d4d0d1ca97b41.mount has successfully entered the 'dead' state. Jan 23 16:34:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4e394ac1\x2d3871\x2d4189\x2d9e59\x2d4d0d1ca97b41.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4e394ac1\x2d3871\x2d4189\x2d9e59\x2d4d0d1ca97b41.mount has successfully entered the 'dead' state. Jan 23 16:34:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4e394ac1\x2d3871\x2d4189\x2d9e59\x2d4d0d1ca97b41.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4e394ac1\x2d3871\x2d4189\x2d9e59\x2d4d0d1ca97b41.mount has successfully entered the 'dead' state. 
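
A few entries below (16:34:19, then roughly every 11 seconds: 16:34:30, 16:34:41, 16:34:53, 16:35:04), kubelet declines to restart ovnkube-node with "back-off 5m0s restarting failed container". Kubelet's crash-loop back-off doubles the delay after each failed restart; the 10s base and 5m cap used here match upstream kubelet defaults, but treat them as assumptions for this cluster. Once the cap is reached the message pins at 5m0s even though the sync loop keeps re-evaluating the pod, which is exactly the pattern in these entries. A sketch of the doubling schedule:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelays reproduces kubelet-style exponential back-off: start at
// base, double after every failed restart, never exceed maxDelay.
func crashLoopDelays(base, maxDelay time.Duration, restarts int) []time.Duration {
	delays := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay // pinned here: hence the constant "back-off 5m0s"
		}
	}
	return delays
}

func main() {
	fmt.Println(crashLoopDelays(10*time.Second, 5*time.Minute, 7))
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}
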
Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.080293342Z" level=info msg="runSandbox: deleting pod ID 6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0 from idIndex" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.080319772Z" level=info msg="runSandbox: removing pod sandbox 6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.080334483Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.080347200Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.096431905Z" level=info msg="runSandbox: removing pod sandbox from storage: 6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.100036094Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:16.100054201Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=5ed92317-6527-4337-a0fb-c1710ee623ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:16.100186 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:34:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:16.100358 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:34:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:16.100393 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:34:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:16.100443 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6edf3d46d3ecc2c0b90ecc22236c1792881a921d86d1a2e50be6217ff3a813f0): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:34:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:19.996400 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:34:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:19.996915 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:34:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:20.996028 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:34:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:20.996077 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:20.996336795Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:20.996386213Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:20.996413794Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:20.996465229Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:21.013695225Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/6bb1f3ea-62c6-4f7d-8a88-2a1a5069e18f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:21.013719981Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:21.013873971Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/39b42c95-4607-4324-974f-c48157a808e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: 
MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:21.013893248Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:21.996402 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:34:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:21.996817379Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:21.996872249Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:22.007862133Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/a59df80d-a341-4243-b71b-ee02b6f3888b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:22.007884237Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:22.995997 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:34:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:22.996111 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:34:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:22.996189 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:22.996435440Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:22.996489765Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:22.996499435Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:22.996537873Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:22.996584442Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:22.996607819Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:23.015324763Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/b5913bef-d96d-41cb-b5bf-f16617a2e279 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:23.015346383Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:23.017931516Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/33903d55-df43-40e7-9f18-d24442c6641f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:23.017951170Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:23.018713500Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/2de6ca78-bf5f-4ca3-9777-8917007a9b15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:23 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:23.018732996Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:24.996247 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:34:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:24.996378 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:24.996637570Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:24.996677944Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:24.996773112Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:24.996818950Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:25.016442646Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/483bc055-5fb6-4a68-be11-bbbe541c5554 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:25.016472922Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:25.017212220Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/4be78512-e6c4-4476-b734-a6e7502af3f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:25.017235704Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:25.995785 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:25.996153737Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:25.996202734Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:26.007506195Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/5a629cac-0d41-4035-9349-db3b89ac7ad8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:26.007534112Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:26.996161 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:34:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:26.996557104Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:26.996607277Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:27.007122822Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/d937d9bd-e136-47d2-b3be-a926d8835628 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:27.007146462Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:27.863882 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:27.863900 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:27.863906 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:27.863912 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:27.863918 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:27.863924 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:27.863931 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:34:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:28.142271815Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:34:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:29.995873 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:34:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:29.996208945Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:29.996255348Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:34:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:30.007808164Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/99f92e74-f60e-4553-bbf0-cf06e89dfbcc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:30.007831385Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:30.996776 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:34:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:30.997322 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:34:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491678.1208] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:34:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491678.1214] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:34:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491678.1215] device 
(eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:34:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491678.1512] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:34:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491678.1513] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268024192Z" level=info msg="NetworkStart: stopping network for sandbox 8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268201167Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/2da71eae-3c8b-4424-a95b-0a3e772aa3f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268230073Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268236881Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268243168Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268568554Z" level=info msg="NetworkStart: stopping network for sandbox 6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268712098Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/7efe3870-9c76-4091-8251-9f143b3e7ce0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268740110Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268747430Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268754330Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268809360Z" level=info msg="NetworkStart: stopping network for sandbox 065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox
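
These "not found in CNI cache" / "falling back to loading from existing plugins on disk" pairs mark the DEL path: a CNI DEL normally replays the network config that was cached when the ADD succeeded, and only re-reads the plugin config from disk when the cache entry is absent, which is expected here because the corresponding ADDs never completed. A stdlib-only sketch of that lookup order follows; the cache layout under /var/lib/cni and the conf-dir glob are assumptions modeled on common CNI defaults, not taken from this node.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// loadNetworkConfig prefers the per-sandbox config cached at ADD time and
// falls back to the on-disk CNI configuration when the cache entry is gone.
func loadNetworkConfig(networkName, containerID, cacheDir, confDir string) ([]byte, error) {
	// Assumed cache naming; real runtimes use libcni's cache under cacheDir.
	cached := filepath.Join(cacheDir, "results", fmt.Sprintf("%s-%s", networkName, containerID))
	if data, err := os.ReadFile(cached); err == nil {
		return data, nil // replay exactly what the ADD used
	}
	// Cache miss: mirror the logged warning and re-read plugin config from disk.
	fmt.Printf("network %q not found in CNI cache; falling back to loading from existing plugins on disk\n", networkName)
	matches, err := filepath.Glob(filepath.Join(confDir, "*.conflist"))
	if err != nil || len(matches) == 0 {
		return nil, fmt.Errorf("no CNI config for %q in %s", networkName, confDir)
	}
	return os.ReadFile(matches[0])
}

func main() {
	cfg, err := loadNetworkConfig("multus-cni-network",
		"8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910",
		"/var/lib/cni", "/etc/cni/net.d")
	fmt.Println(len(cfg), err)
}
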
16:34:38.268934445Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/aa50bc57-0ca2-452e-93de-c1216a41ba35 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268958104Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268966745Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.268975172Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271100844Z" level=info msg="NetworkStart: stopping network for sandbox 6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271238827Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ed076e5d-f272-45be-97b0-1fd468ee6746 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271263954Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271272533Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271280029Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271707073Z" level=info msg="NetworkStart: stopping network for sandbox 94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271843330Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/52848842-b00e-49c2-8e9b-68b6fec782fd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271865874Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271872835Z" 
level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:38.271878720Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:41.996580 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:34:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:41.997260 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:34:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:53.024077409Z" level=info msg="NetworkStart: stopping network for sandbox 1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:34:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:53.024264162Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/084cd571-0f3c-4deb-8a0d-6373ea78f6ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:34:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:53.024292660Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:34:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:53.024302107Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:34:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:53.024309910Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:34:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:34:53.996390 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:34:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:34:53.996901 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:34:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:34:58.143946109Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:35:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:04.996234 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:35:04 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: E0123 16:35:04.996861 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028036639Z" level=info msg="NetworkStart: stopping network for sandbox 4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028197144Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/6bb1f3ea-62c6-4f7d-8a88-2a1a5069e18f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028253024Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028262048Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028269827Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028633154Z" level=info msg="NetworkStart: stopping network for sandbox 532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028735404Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/39b42c95-4607-4324-974f-c48157a808e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028755787Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028763085Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:06.028770139Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:07.021964997Z" level=info msg="NetworkStart: stopping network for sandbox 89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421" 
Jan 23 16:35:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:07.021964997Z" level=info msg="NetworkStart: stopping network for sandbox 89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:07.022125193Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/a59df80d-a341-4243-b71b-ee02b6f3888b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:35:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:07.022150238Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:35:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:07.022157912Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:35:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:07.022164702Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.030291748Z" level=info msg="NetworkStart: stopping network for sandbox 91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.030456167Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/b5913bef-d96d-41cb-b5bf-f16617a2e279 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.030485564Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.030493361Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.030501320Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.031247091Z" level=info msg="NetworkStart: stopping network for sandbox c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.031359548Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/33903d55-df43-40e7-9f18-d24442c6641f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]:
time="2023-01-23 16:35:08.031380394Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.031387644Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.031394103Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.032183214Z" level=info msg="NetworkStart: stopping network for sandbox 859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.032322183Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/2de6ca78-bf5f-4ca3-9777-8917007a9b15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.032350863Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.032358863Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:08.032366739Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029670630Z" level=info msg="NetworkStart: stopping network for sandbox dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029761618Z" level=info msg="NetworkStart: stopping network for sandbox f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029810598Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/483bc055-5fb6-4a68-be11-bbbe541c5554 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029834889Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029841144Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029847089Z" level=info msg="Deleting pod 
openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029925146Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/4be78512-e6c4-4476-b734-a6e7502af3f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029950850Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029959574Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:10.029967820Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:11.020319824Z" level=info msg="NetworkStart: stopping network for sandbox 43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:11.020463095Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/5a629cac-0d41-4035-9349-db3b89ac7ad8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:11.020487834Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:11.020495050Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:11.020501727Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:12.018713411Z" level=info msg="NetworkStart: stopping network for sandbox deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:12.018849514Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/d937d9bd-e136-47d2-b3be-a926d8835628 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:12.018871510Z" level=error msg="error 
loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:12.018878183Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:12.018885332Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:15.021524962Z" level=info msg="NetworkStart: stopping network for sandbox 0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:15.021666360Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/99f92e74-f60e-4553-bbf0-cf06e89dfbcc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:15.021714068Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:35:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:15.021720638Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:35:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:15.021727215Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:17.997675 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:35:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:17.998199 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.279194468Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.279251047Z" level=info msg="runSandbox: 
cleaning up namespaces after failing to run sandbox 065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.279719924Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.279762948Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.279733594Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.279840618Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.282601949Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.282633092Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.282768617Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.282797322Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7efe3870\x2d9c76\x2d4091\x2d8251\x2d9f143b3e7ce0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7efe3870\x2d9c76\x2d4091\x2d8251\x2d9f143b3e7ce0.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-aa50bc57\x2d0ca2\x2d452e\x2d93de\x2dc1216a41ba35.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-aa50bc57\x2d0ca2\x2d452e\x2d93de\x2dc1216a41ba35.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2da71eae\x2d3c8b\x2d4424\x2da95b\x2d0a3e772aa3f6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2da71eae\x2d3c8b\x2d4424\x2da95b\x2d0a3e772aa3f6.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-52848842\x2db00e\x2d49c2\x2d8e9b\x2d68b6fec782fd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-52848842\x2db00e\x2d49c2\x2d8e9b\x2d68b6fec782fd.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ed076e5d\x2df272\x2d45be\x2d97b0\x2d1fd468ee6746.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ed076e5d\x2df272\x2d45be\x2d97b0\x2d1fd468ee6746.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-52848842\x2db00e\x2d49c2\x2d8e9b\x2d68b6fec782fd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-52848842\x2db00e\x2d49c2\x2d8e9b\x2d68b6fec782fd.mount has successfully entered the 'dead' state. 
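The runSandbox failures above all end the same way: Multus refuses to run the delegate ADD/DEL until a readiness-indicator file for the default network exists, and it waits for that file in a poll loop ("PollImmediate error ... timed out waiting for the condition" is the standard error text from the wait helpers in k8s.io/apimachinery). With ovnkube-node in CrashLoopBackOff, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf never appears, so every sandbox create and teardown on the node times out. A minimal sketch of that gate, assuming the check reduces to an os.Stat poll and using placeholder interval/timeout values rather than Multus's real defaults:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator blocks until the default network's config file
    // exists, mirroring the gate Multus applies before each ADD/DEL.
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err != nil {
                if os.IsNotExist(err) {
                    return false, nil // file not there yet: keep polling
                }
                return false, err // any other stat failure aborts the poll
            }
            return true, nil
        })
    }

    func main() {
        // While ovnkube-node crash-loops, this file never appears and the
        // poll returns wait.ErrWaitTimeout, which stringifies to exactly the
        // "timed out waiting for the condition" text seen in the log.
        err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
        if err != nil {
            fmt.Println("pollimmediate error:", err)
        }
    }

This is why the CrashLoopBackOff of a single container (ovnkube-node) cascades into CNI failures for every other pod on the node.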
Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ed076e5d\x2df272\x2d45be\x2d97b0\x2d1fd468ee6746.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ed076e5d\x2df272\x2d45be\x2d97b0\x2d1fd468ee6746.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7efe3870\x2d9c76\x2d4091\x2d8251\x2d9f143b3e7ce0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7efe3870\x2d9c76\x2d4091\x2d8251\x2d9f143b3e7ce0.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-aa50bc57\x2d0ca2\x2d452e\x2d93de\x2dc1216a41ba35.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-aa50bc57\x2d0ca2\x2d452e\x2d93de\x2dc1216a41ba35.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2da71eae\x2d3c8b\x2d4424\x2da95b\x2d0a3e772aa3f6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2da71eae\x2d3c8b\x2d4424\x2da95b\x2d0a3e772aa3f6.mount has successfully entered the 'dead' state. Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335431096Z" level=info msg="runSandbox: deleting pod ID 6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920 from idIndex" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335464293Z" level=info msg="runSandbox: removing pod sandbox 6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335480147Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335491865Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335436810Z" level=info msg="runSandbox: deleting pod ID 065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf from idIndex" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335543232Z" level=info msg="runSandbox: removing pod sandbox 065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335555739Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335568464Z" level=info msg="runSandbox: unmounting shmPath for sandbox 065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335437175Z" level=info msg="runSandbox: deleting pod ID 6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66 from idIndex" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335633373Z" level=info msg="runSandbox: removing pod sandbox 6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335650035Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335666351Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335441971Z" level=info msg="runSandbox: deleting pod ID 94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192 from idIndex" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335761282Z" level=info msg="runSandbox: removing pod sandbox 94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335774970Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.335789069Z" level=info msg="runSandbox: unmounting shmPath for sandbox 94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.339286697Z" level=info msg="runSandbox: deleting pod ID 8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910 from idIndex" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.339313203Z" level=info msg="runSandbox: removing pod sandbox 8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.339326882Z" level=info msg="runSandbox: 
deleting container ID from idIndex for sandbox 8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.339342124Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.347474516Z" level=info msg="runSandbox: removing pod sandbox from storage: 94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.347484627Z" level=info msg="runSandbox: removing pod sandbox from storage: 065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.348446345Z" level=info msg="runSandbox: removing pod sandbox from storage: 6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.348496550Z" level=info msg="runSandbox: removing pod sandbox from storage: 6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.350698020Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.350719734Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=81cdb028-b161-4fd7-aaec-4417d0a8d883 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.350969 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.351011 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.351034 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.351077 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.353954955Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.353972424Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=182c79e3-62cb-4044-963e-3a0b196b46f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.354084 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.354118 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.354139 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.354177 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.354547558Z" level=info msg="runSandbox: removing pod sandbox from storage: 8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.360370324Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.360398676Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=63fd4008-df5e-43b5-92f7-52a22c2bb061 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.360597 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.360630 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.360651 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.360687 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.364016237Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.364047142Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=bca74957-2094-408d-b78a-08013cc818fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.364339 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.364370 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.364391 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.364425 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.367201463Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.367226383Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=74863360-0745-45a7-a684-87b4fb31d10f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.367428 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.367459 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.367480 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:23.367517 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:23.396396 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:23.396449 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:23.396641 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.396703600Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.396735065Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:23.396734 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:35:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:23.396802 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.396800865Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.396829972Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.396894107Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.396924590Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.397028760Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.397060531Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.397078763Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.397064088Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.422287787Z" level=info msg="Got pod network 
&{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/3d3eb830-e930-447e-b6b4-ef479b536ace Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.422470033Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.423257470Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/edf2537c-f21e-4f8c-946f-d97ca8bc5a9a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.423277691Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.424086367Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5dbedc19-92fe-4176-bdc3-933eac19b6be Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.424109834Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.427321709Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/62d63642-f9d5-43f1-b3c9-6603a5504dc6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.427344805Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.428135901Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/f0ce3220-8135-4b29-be07-aad31d86bc53 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:23.428153496Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:24 hub-master-0.workload.bos2.lab 
systemd[1]: run-netns-52848842\x2db00e\x2d49c2\x2d8e9b\x2d68b6fec782fd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-52848842\x2db00e\x2d49c2\x2d8e9b\x2d68b6fec782fd.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ed076e5d\x2df272\x2d45be\x2d97b0\x2d1fd468ee6746.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ed076e5d\x2df272\x2d45be\x2d97b0\x2d1fd468ee6746.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7efe3870\x2d9c76\x2d4091\x2d8251\x2d9f143b3e7ce0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7efe3870\x2d9c76\x2d4091\x2d8251\x2d9f143b3e7ce0.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-aa50bc57\x2d0ca2\x2d452e\x2d93de\x2dc1216a41ba35.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-aa50bc57\x2d0ca2\x2d452e\x2d93de\x2dc1216a41ba35.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2da71eae\x2d3c8b\x2d4424\x2da95b\x2d0a3e772aa3f6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2da71eae\x2d3c8b\x2d4424\x2da95b\x2d0a3e772aa3f6.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-065d9279bec32f19d883953ee8d27a6ac486c2a1d1b07cc09fd0f7233f3511cf-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6bec576e2642466c5d34d62a84e608f6b2867bc78ed7804d6420bba835401a66-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6c12d2edb96cc4205124dcd19223bf20bc04e8dc535fbf16348b189d1e792920-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-94171e58fc4e564c7f2a248f61b287a6f9904db6a121989a2c7f159114b91192-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:35:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8f59bcd121e0da7ca92d0b978bc1f784964f801becd7ce564d10afe4a1870910-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:27.864094 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:27.864109 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:27.864117 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:27.864123 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:27.864129 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:27.864135 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:27.864144 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:35:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:27.870232934Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=3eaabf91-a332-4a3d-9540-90df7ac89aab name=/runtime.v1.ImageService/ImageStatus Jan 23 16:35:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:27.870363890Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3eaabf91-a332-4a3d-9540-90df7ac89aab name=/runtime.v1.ImageService/ImageStatus Jan 23 16:35:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:28.141731762Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:35:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:28.996556 8631 scope.go:115] "RemoveContainer" 
containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:35:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:28.997167 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.035456274Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.035509666Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-084cd571\x2d0f3c\x2d4deb\x2d8a0d\x2d6373ea78f6ed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-084cd571\x2d0f3c\x2d4deb\x2d8a0d\x2d6373ea78f6ed.mount has successfully entered the 'dead' state. Jan 23 16:35:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-084cd571\x2d0f3c\x2d4deb\x2d8a0d\x2d6373ea78f6ed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-084cd571\x2d0f3c\x2d4deb\x2d8a0d\x2d6373ea78f6ed.mount has successfully entered the 'dead' state. Jan 23 16:35:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-084cd571\x2d0f3c\x2d4deb\x2d8a0d\x2d6373ea78f6ed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-084cd571\x2d0f3c\x2d4deb\x2d8a0d\x2d6373ea78f6ed.mount has successfully entered the 'dead' state. 
Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.072323454Z" level=info msg="runSandbox: deleting pod ID 1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81 from idIndex" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.072353714Z" level=info msg="runSandbox: removing pod sandbox 1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.072370482Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.072384781Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.084424095Z" level=info msg="runSandbox: removing pod sandbox from storage: 1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.087419736Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:38.087439747Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=e5122b93-93b8-419c-8f77-e2bf35906f8a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:38.087682 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:35:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:38.087731 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:35:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:38.087756 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:35:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:38.087805 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(1890172d08cfb8270a9f6525df07a34b8cd6d5da0c52229a3950d817eaeece81): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:35:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:39.996657 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:35:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:39.997284 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:35:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:49.996412 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:35:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:49.996738811Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:49.997000299Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:35:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:50.009137557Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/e122b44e-0563-4b40-ac2c-bae9e8ffb76a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:35:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:50.009164217Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.038704741Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.038739612Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.039000644Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.039033728Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-39b42c95\x2d4607\x2d4324\x2d974f\x2dc48157a808e8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-39b42c95\x2d4607\x2d4324\x2d974f\x2dc48157a808e8.mount has successfully entered the 'dead' state. Jan 23 16:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6bb1f3ea\x2d62c6\x2d4f7d\x2d8a88\x2d2a1a5069e18f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6bb1f3ea\x2d62c6\x2d4f7d\x2d8a88\x2d2a1a5069e18f.mount has successfully entered the 'dead' state. Jan 23 16:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-39b42c95\x2d4607\x2d4324\x2d974f\x2dc48157a808e8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-39b42c95\x2d4607\x2d4324\x2d974f\x2dc48157a808e8.mount has successfully entered the 'dead' state. Jan 23 16:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6bb1f3ea\x2d62c6\x2d4f7d\x2d8a88\x2d2a1a5069e18f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6bb1f3ea\x2d62c6\x2d4f7d\x2d8a88\x2d2a1a5069e18f.mount has successfully entered the 'dead' state. Jan 23 16:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-39b42c95\x2d4607\x2d4324\x2d974f\x2dc48157a808e8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-39b42c95\x2d4607\x2d4324\x2d974f\x2dc48157a808e8.mount has successfully entered the 'dead' state. 
Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.091306113Z" level=info msg="runSandbox: deleting pod ID 532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec from idIndex" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.091334330Z" level=info msg="runSandbox: removing pod sandbox 532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.091349572Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.091362557Z" level=info msg="runSandbox: unmounting shmPath for sandbox 532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.098304765Z" level=info msg="runSandbox: deleting pod ID 4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d from idIndex" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.098330198Z" level=info msg="runSandbox: removing pod sandbox 4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.098342082Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.098353396Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.106459904Z" level=info msg="runSandbox: removing pod sandbox from storage: 532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.109367341Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.109384618Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=7591a24b-888f-46c9-bf9b-e8504a277d3b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 
16:35:51.109577 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:51.109623 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:51.109646 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:51.109695 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.110434425Z" level=info msg="runSandbox: removing pod sandbox from storage: 4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.113625471Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:51.113643579Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=77c0c9fc-9db9-4946-99b9-9b0f4393a9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:51.113840 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:51.113883 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:51.113910 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:51.113963 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.033872440Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.033904754Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a59df80d\x2da341\x2d4243\x2db71b\x2dee02b6f3888b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a59df80d\x2da341\x2d4243\x2db71b\x2dee02b6f3888b.mount has successfully entered the 'dead' state. Jan 23 16:35:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6bb1f3ea\x2d62c6\x2d4f7d\x2d8a88\x2d2a1a5069e18f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6bb1f3ea\x2d62c6\x2d4f7d\x2d8a88\x2d2a1a5069e18f.mount has successfully entered the 'dead' state. Jan 23 16:35:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-532dfd4397afd61e7e0759ed62b93d6b9e74f53235f582713f1a09526d4482ec-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:35:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4735bfa9ad05f7a4956a2382c69eeb7ec9bcfef5197c0f3749e252448713ea9d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:35:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a59df80d\x2da341\x2d4243\x2db71b\x2dee02b6f3888b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a59df80d\x2da341\x2d4243\x2db71b\x2dee02b6f3888b.mount has successfully entered the 'dead' state. 
Jan 23 16:35:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a59df80d\x2da341\x2d4243\x2db71b\x2dee02b6f3888b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a59df80d\x2da341\x2d4243\x2db71b\x2dee02b6f3888b.mount has successfully entered the 'dead' state. Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.082309616Z" level=info msg="runSandbox: deleting pod ID 89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421 from idIndex" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.082333434Z" level=info msg="runSandbox: removing pod sandbox 89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.082347313Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.082360532Z" level=info msg="runSandbox: unmounting shmPath for sandbox 89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.105450602Z" level=info msg="runSandbox: removing pod sandbox from storage: 89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.108880832Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:52.108899314Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=46cd8774-bd47-4122-8e35-1567245c22e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:35:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:52.109109 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:35:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:52.109153 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:35:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:52.109175 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:35:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:52.109232 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(89501547bc4530a65bfa8810b0e63da05e2b0f6b13f47271b9af9d39b8107421): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.041606963Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.041651787Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.041885239Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.041917232Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.042852842Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.042880649Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-33903d55\x2ddf43\x2d40e7\x2d9f18\x2dd24442c6641f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-33903d55\x2ddf43\x2d40e7\x2d9f18\x2dd24442c6641f.mount has successfully entered the 'dead' state.
Jan 23 16:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b5913bef\x2dd96d\x2d41cb\x2db5bf\x2df16617a2e279.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b5913bef\x2dd96d\x2d41cb\x2db5bf\x2df16617a2e279.mount has successfully entered the 'dead' state.
Jan 23 16:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2de6ca78\x2dbf5f\x2d4ca3\x2d9777\x2d8917007a9b15.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2de6ca78\x2dbf5f\x2d4ca3\x2d9777\x2d8917007a9b15.mount has successfully entered the 'dead' state.
Jan 23 16:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-33903d55\x2ddf43\x2d40e7\x2d9f18\x2dd24442c6641f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-33903d55\x2ddf43\x2d40e7\x2d9f18\x2dd24442c6641f.mount has successfully entered the 'dead' state.
Jan 23 16:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2de6ca78\x2dbf5f\x2d4ca3\x2d9777\x2d8917007a9b15.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2de6ca78\x2dbf5f\x2d4ca3\x2d9777\x2d8917007a9b15.mount has successfully entered the 'dead' state.
Jan 23 16:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b5913bef\x2dd96d\x2d41cb\x2db5bf\x2df16617a2e279.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b5913bef\x2dd96d\x2d41cb\x2db5bf\x2df16617a2e279.mount has successfully entered the 'dead' state.
Jan 23 16:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2de6ca78\x2dbf5f\x2d4ca3\x2d9777\x2d8917007a9b15.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2de6ca78\x2dbf5f\x2d4ca3\x2d9777\x2d8917007a9b15.mount has successfully entered the 'dead' state.
Jan 23 16:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b5913bef\x2dd96d\x2d41cb\x2db5bf\x2df16617a2e279.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b5913bef\x2dd96d\x2d41cb\x2db5bf\x2df16617a2e279.mount has successfully entered the 'dead' state.
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100296786Z" level=info msg="runSandbox: deleting pod ID 859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7 from idIndex" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100325947Z" level=info msg="runSandbox: removing pod sandbox 859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100341657Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100358738Z" level=info msg="runSandbox: unmounting shmPath for sandbox 859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100298618Z" level=info msg="runSandbox: deleting pod ID c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554 from idIndex" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100416494Z" level=info msg="runSandbox: removing pod sandbox c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100430629Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100443990Z" level=info msg="runSandbox: unmounting shmPath for sandbox c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100414464Z" level=info msg="runSandbox: deleting pod ID 91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa from idIndex" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100510811Z" level=info msg="runSandbox: removing pod sandbox 91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100525462Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.100538230Z" level=info msg="runSandbox: unmounting shmPath for sandbox 91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.117456494Z" level=info msg="runSandbox: removing pod sandbox from storage: 91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.121569563Z" level=info msg="runSandbox: removing pod sandbox from storage: c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.121629302Z" level=info msg="runSandbox: removing pod sandbox from storage: 859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.122971199Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.122992752Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=d2998e6d-a163-4f78-a4c7-8683c8e83d14 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.123163 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.123227 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.123253 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.123306 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.126328344Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.126344781Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0fda81b3-f2cf-4d35-9702-9e7909ea4803 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.126502 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.126533 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.126554 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.126589 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.129292595Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:53.129309456Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=2f95774e-49fc-4382-b098-7e614ff707de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.129495 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.129528 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.129548 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:53.129588 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 16:35:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-33903d55\x2ddf43\x2d40e7\x2d9f18\x2dd24442c6641f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-33903d55\x2ddf43\x2d40e7\x2d9f18\x2dd24442c6641f.mount has successfully entered the 'dead' state.
Jan 23 16:35:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-859e6294b0653e09eb5bdbf4c507c9a48e63c4c0887654b5586be9c1311399b7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:35:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c2fc7a87b33c21f9e400fb6bb7eae98757b8d369925fbf0285feb76a2deea554-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:35:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-91b530cb5f028fe727e26424d9e4f867d648ce70b1a99e690744d176a37680aa-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:35:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:54.997036 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:35:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:54.997686 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.040142167Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.040182073Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.041476613Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.041520904Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-483bc055\x2d5fb6\x2d4a68\x2dbe11\x2dbbbe541c5554.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-483bc055\x2d5fb6\x2d4a68\x2dbe11\x2dbbbe541c5554.mount has successfully entered the 'dead' state.
Jan 23 16:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4be78512\x2de6c4\x2d4476\x2db734\x2da6e7502af3f2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4be78512\x2de6c4\x2d4476\x2db734\x2da6e7502af3f2.mount has successfully entered the 'dead' state.
Jan 23 16:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4be78512\x2de6c4\x2d4476\x2db734\x2da6e7502af3f2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4be78512\x2de6c4\x2d4476\x2db734\x2da6e7502af3f2.mount has successfully entered the 'dead' state.
Jan 23 16:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-483bc055\x2d5fb6\x2d4a68\x2dbe11\x2dbbbe541c5554.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-483bc055\x2d5fb6\x2d4a68\x2dbe11\x2dbbbe541c5554.mount has successfully entered the 'dead' state.
Jan 23 16:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4be78512\x2de6c4\x2d4476\x2db734\x2da6e7502af3f2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4be78512\x2de6c4\x2d4476\x2db734\x2da6e7502af3f2.mount has successfully entered the 'dead' state.
Jan 23 16:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-483bc055\x2d5fb6\x2d4a68\x2dbe11\x2dbbbe541c5554.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-483bc055\x2d5fb6\x2d4a68\x2dbe11\x2dbbbe541c5554.mount has successfully entered the 'dead' state.
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.090388757Z" level=info msg="runSandbox: deleting pod ID dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd from idIndex" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.090416982Z" level=info msg="runSandbox: removing pod sandbox dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.090432815Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.090446421Z" level=info msg="runSandbox: unmounting shmPath for sandbox dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.091292453Z" level=info msg="runSandbox: deleting pod ID f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf from idIndex" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.091321483Z" level=info msg="runSandbox: removing pod sandbox f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.091336247Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.091360989Z" level=info msg="runSandbox: unmounting shmPath for sandbox f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.106418754Z" level=info msg="runSandbox: removing pod sandbox from storage: dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.107420764Z" level=info msg="runSandbox: removing pod sandbox from storage: f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.109986899Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.110006038Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=af4d64b2-1d1f-4336-bd74-17497099da10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:55.110295 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:55.110340 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:55.110363 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:55.110401 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(dbdef11ca90fc09b0ce727f7d5556f4be1d27356585c0e57b72b2f02c7aff3bd): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.113048554Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:55.113066509Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=0fc9a138-ea8d-4aad-bd18-402c10fda5d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:55.113306 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:55.113341 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:55.113362 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:55.113398 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.030577856Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.030610248Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5a629cac\x2d0d41\x2d4035\x2d9349\x2ddb3b89ac7ad8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5a629cac\x2d0d41\x2d4035\x2d9349\x2ddb3b89ac7ad8.mount has successfully entered the 'dead' state.
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5a629cac\x2d0d41\x2d4035\x2d9349\x2ddb3b89ac7ad8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5a629cac\x2d0d41\x2d4035\x2d9349\x2ddb3b89ac7ad8.mount has successfully entered the 'dead' state.
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f109b015a90e533868cb926866da67552cd07699c3e4c1854c32cdba8060eedf-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5a629cac\x2d0d41\x2d4035\x2d9349\x2ddb3b89ac7ad8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5a629cac\x2d0d41\x2d4035\x2d9349\x2ddb3b89ac7ad8.mount has successfully entered the 'dead' state.
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.073309269Z" level=info msg="runSandbox: deleting pod ID 43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487 from idIndex" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.073332243Z" level=info msg="runSandbox: removing pod sandbox 43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.073345437Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.073356036Z" level=info msg="runSandbox: unmounting shmPath for sandbox 43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.094441474Z" level=info msg="runSandbox: removing pod sandbox from storage: 43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.097731620Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:56.097750735Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=4e12d6c7-2f44-4d54-b4a8-e774f1f0ea36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:56.097947 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:35:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:56.097990 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:35:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:56.098013 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:35:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:56.098063 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(43893bb73b2b8c779eea18e93bfdb6761382898dbdc97ab67f0c58e168c05487): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:35:56 hub-master-0.workload.bos2.lab conmon[33542]: conmon 6b3b1e52cdbeeba69186 : container 33554 exited with status 1
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: crio-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope has successfully entered the 'dead' state.
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: crio-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope: Consumed 3.743s CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope completed and consumed the indicated resources.
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope has successfully entered the 'dead' state.
Jan 23 16:35:56 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope: Consumed 60ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868.scope completed and consumed the indicated resources.
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.030176190Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.030211884Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d937d9bd\x2de136\x2d47d2\x2db3be\x2da926d8835628.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d937d9bd\x2de136\x2d47d2\x2db3be\x2da926d8835628.mount has successfully entered the 'dead' state.
Jan 23 16:35:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d937d9bd\x2de136\x2d47d2\x2db3be\x2da926d8835628.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d937d9bd\x2de136\x2d47d2\x2db3be\x2da926d8835628.mount has successfully entered the 'dead' state.
Jan 23 16:35:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d937d9bd\x2de136\x2d47d2\x2db3be\x2da926d8835628.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d937d9bd\x2de136\x2d47d2\x2db3be\x2da926d8835628.mount has successfully entered the 'dead' state.
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.078306272Z" level=info msg="runSandbox: deleting pod ID deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5 from idIndex" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.078328842Z" level=info msg="runSandbox: removing pod sandbox deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.078341203Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.078352502Z" level=info msg="runSandbox: unmounting shmPath for sandbox deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.098463255Z" level=info msg="runSandbox: removing pod sandbox from storage: deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.101549183Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.101566955Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=20de2146-99cc-4252-94d6-5064bb8b8b06 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:57.101795 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:57.101851 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:57.101881 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:35:57.101935 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(deea550440a9a9cb9b53837f94d2c567551efffaa0e1b430405a20fc6ec8a3e5): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 16:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:57.466138 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868" exitCode=1
Jan 23 16:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:57.466164 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868}
Jan 23 16:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:57.466186 8631 scope.go:115] "RemoveContainer" containerID="274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:57.466469 8631 scope.go:115] "RemoveContainer" containerID="6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.466818759Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=a3d66697-f252-4201-a19f-4dd5125b1b1f name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.466955923Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a3d66697-f252-4201-a19f-4dd5125b1b1f name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.467093609Z" level=info msg="Removing container: 274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da" id=3a512098-492d-48f6-a285-bb706ab1d3a3 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.467364813Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=c144eb44-6217-416c-80a6-81edb3f54f16 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.467482735Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c144eb44-6217-416c-80a6-81edb3f54f16 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.468157573Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=804a41be-583f-49e6-aa2f-e8a42582b294 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23
16:35:57.468264387Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:35:57 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-b0ff71bdbee5493abad9392e45ef33c090934512b5541abfbcba747aef581c89-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-b0ff71bdbee5493abad9392e45ef33c090934512b5541abfbcba747aef581c89-merged.mount has successfully entered the 'dead' state. Jan 23 16:35:57 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-b0ff71bdbee5493abad9392e45ef33c090934512b5541abfbcba747aef581c89-merged.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-b0ff71bdbee5493abad9392e45ef33c090934512b5541abfbcba747aef581c89-merged.mount completed and consumed the indicated resources. Jan 23 16:35:57 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope. -- Subject: Unit crio-conmon-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.529004009Z" level=info msg="Removed container 274b97d85b2bd8c34760ecf318bb59a9f5303320c959a3423a35691d137a63da: openshift-multus/multus-cdt6c/kube-multus" id=3a512098-492d-48f6-a285-bb706ab1d3a3 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:35:57 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1. -- Subject: Unit crio-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.627778787Z" level=info msg="Created container b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1: openshift-multus/multus-cdt6c/kube-multus" id=804a41be-583f-49e6-aa2f-e8a42582b294 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.628141340Z" level=info msg="Starting container: b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1" id=cc433c1c-d01e-4c92-ad1f-604e93b82f42 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.646878794Z" level=info msg="Started container" PID=51481 containerID=b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1 description=openshift-multus/multus-cdt6c/kube-multus id=cc433c1c-d01e-4c92-ad1f-604e93b82f42 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.651445320Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_fec43ef2-5b88-4ab6-95fe-a941fde5b436\""
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.660723317Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.660743305Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.672515347Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\""
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.681998775Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.682021475Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:57.682034497Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_fec43ef2-5b88-4ab6-95fe-a941fde5b436\""
Jan 23 16:35:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:35:58.144326839Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:35:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:35:58.469282 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1}
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.033323470Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.033359870Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-99f92e74\x2df60e\x2d4553\x2dbbf0\x2dcf06e89dfbcc.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-99f92e74\x2df60e\x2d4553\x2dbbf0\x2dcf06e89dfbcc.mount has successfully entered the 'dead' state.
Jan 23 16:36:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-99f92e74\x2df60e\x2d4553\x2dbbf0\x2dcf06e89dfbcc.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-99f92e74\x2df60e\x2d4553\x2dbbf0\x2dcf06e89dfbcc.mount has successfully entered the 'dead' state.
Jan 23 16:36:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-99f92e74\x2df60e\x2d4553\x2dbbf0\x2dcf06e89dfbcc.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-99f92e74\x2df60e\x2d4553\x2dbbf0\x2dcf06e89dfbcc.mount has successfully entered the 'dead' state.
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.072289711Z" level=info msg="runSandbox: deleting pod ID 0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c from idIndex" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.072315218Z" level=info msg="runSandbox: removing pod sandbox 0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.072328670Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.072339790Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.084467521Z" level=info msg="runSandbox: removing pod sandbox from storage: 0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.087862094Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:00.087880562Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=5c9516b9-691e-4ab1-893a-ae04ddb15ca3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:00.088063 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:36:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:00.088201 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:36:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:00.088227 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:36:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:00.088276 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b62803ce41136db7c5c56c993a538cde0fdfa63283730b1f8c3499c0ada744c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:36:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:02.996229 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:02.996599115Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:02.996640016Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:03.011638028Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/e873befc-f37a-41a3-98e8-16846c875614 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:03.011829858Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:03.995716 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:36:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:03.995788 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:36:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:03.996077328Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:03.996120330Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:03.996151194Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:03.996180811Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:04.009730778Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/6fe93b99-b035-419e-a0b1-d4567ca089d3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:04.009752497Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:04.011854008Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d5bb5c34-3429-4101-8951-0bd76b3e1825 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:04.011879429Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:04.995881 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:04.996002 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:04.996377213Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:04.996432316Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:04.996464664Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:04.996510262Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:05.011557664Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/70d467df-81a6-4513-bb79-efdc80952c6e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:05.011577253Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:05.011787737Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b698f6f7-b5a6-4a03-8bd4-03171e258f01 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:05.011808503Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:05.995835 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:05.996151002Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:05.996184704Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:06.007284563Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/75367401-4fde-4c90-9642-d587a3ef0b32 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:06.007303388Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.436087216Z" level=info msg="NetworkStart: stopping network for sandbox 015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.436296794Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/edf2537c-f21e-4f8c-946f-d97ca8bc5a9a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.436323137Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.436330815Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.436337840Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.437789007Z" level=info msg="NetworkStart: stopping network for sandbox 3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.437890638Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/3d3eb830-e930-447e-b6b4-ef479b536ace Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.437911953Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.437918079Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.437925630Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.438337839Z" level=info msg="NetworkStart: stopping network for sandbox 954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.438491727Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5dbedc19-92fe-4176-bdc3-933eac19b6be Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.438518834Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.438526573Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.438534272Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440406761Z" level=info msg="NetworkStart: stopping network for sandbox 064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440431122Z" level=info msg="NetworkStart: stopping network for sandbox 56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440520706Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/f0ce3220-8135-4b29-be07-aad31d86bc53 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440541387Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440546938Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440552611Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/62d63642-f9d5-43f1-b3c9-6603a5504dc6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440552744Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440578216Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440591973Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:08.440598344Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:08.996513 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:36:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:08.997022 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:36:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:09.995908 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:36:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:09.995983 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:36:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:09.996149 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:36:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:09.996271633Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:09.996311745Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:09.996390725Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:09.996420191Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:09.996458361Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:09.996482117Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:10.021514428Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/a554125c-a79f-4baa-b658-98ea7110eaa3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:10.021544372Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:10.023285596Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/5f7d3f45-2e78-4293-a385-4337728de1b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:10.023304074Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:10.024000001Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/281d76ee-b28e-46d4-9d89-d034cd48ca9b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:10.024019683Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:12.996106 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:36:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:12.996597148Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:12.996645844Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:13.007593258Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/ee9947ec-16ec-4982-bb52-04d454e78ea8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:13.007613656Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:14.996320 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:36:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:14.996702646Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:14.996759241Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:15.008109164Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/768e5e38-a4c5-4278-9ff1-976e08a3b782 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:15.008129877Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:21.996517 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:36:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:21.997032 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:27.864323 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:27.864346 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:27.864352 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:27.864359 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:27.864365 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:27.864371 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:27.864377 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:36:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:28.143380526Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:36:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:34.996657 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:36:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:34.997318 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:36:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:35.025246992Z" level=info msg="NetworkStart: stopping network for sandbox 5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:35.025452660Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/e122b44e-0563-4b40-ac2c-bae9e8ffb76a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:35.025479474Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:35.025487320Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:35.025494924Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:48.026096943Z" level=info msg="NetworkStart: stopping network for sandbox bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:48.026278436Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/e873befc-f37a-41a3-98e8-16846c875614 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:48.026302465Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:48.026310272Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:48.026318445Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024049013Z" level=info msg="NetworkStart: stopping network for sandbox 4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024184382Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/6fe93b99-b035-419e-a0b1-d4567ca089d3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024211738Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024220363Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024226641Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024664325Z" level=info msg="NetworkStart: stopping network for sandbox 3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024815729Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d5bb5c34-3429-4101-8951-0bd76b3e1825 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024842808Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024851327Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:49.024857784Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:49.997051 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:36:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:49.997548 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025091618Z" level=info msg="NetworkStart: stopping network for sandbox 33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025126155Z" level=info msg="NetworkStart: stopping network for sandbox c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025236014Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b698f6f7-b5a6-4a03-8bd4-03171e258f01 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025258777Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/70d467df-81a6-4513-bb79-efdc80952c6e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025260366Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025295930Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025303334Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025290723Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025365694Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:50.025371924Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:51.019270578Z" level=info msg="NetworkStart: stopping network for sandbox e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:51.019420283Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/75367401-4fde-4c90-9642-d587a3ef0b32 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:51.019443815Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:51.019451589Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:51.019458052Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.448113853Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.448150872Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.449394605Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.449426709Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.449824036Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.449864884Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.451810685Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.451847349Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.452197700Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.452245103Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5dbedc19\x2d92fe\x2d4176\x2dbdc3\x2d933eac19b6be.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5dbedc19\x2d92fe\x2d4176\x2dbdc3\x2d933eac19b6be.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-edf2537c\x2df21e\x2d4f8c\x2d946f\x2dd97ca8bc5a9a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-edf2537c\x2df21e\x2d4f8c\x2d946f\x2dd97ca8bc5a9a.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3d3eb830\x2de930\x2d447e\x2db6b4\x2def479b536ace.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3d3eb830\x2de930\x2d447e\x2db6b4\x2def479b536ace.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f0ce3220\x2d8135\x2d4b29\x2dbe07\x2daad31d86bc53.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-f0ce3220\x2d8135\x2d4b29\x2dbe07\x2daad31d86bc53.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-62d63642\x2df9d5\x2d43f1\x2db3c9\x2d6603a5504dc6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-62d63642\x2df9d5\x2d43f1\x2db3c9\x2d6603a5504dc6.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-62d63642\x2df9d5\x2d43f1\x2db3c9\x2d6603a5504dc6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-62d63642\x2df9d5\x2d43f1\x2db3c9\x2d6603a5504dc6.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5dbedc19\x2d92fe\x2d4176\x2dbdc3\x2d933eac19b6be.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5dbedc19\x2d92fe\x2d4176\x2dbdc3\x2d933eac19b6be.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-edf2537c\x2df21e\x2d4f8c\x2d946f\x2dd97ca8bc5a9a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-edf2537c\x2df21e\x2d4f8c\x2d946f\x2dd97ca8bc5a9a.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3d3eb830\x2de930\x2d447e\x2db6b4\x2def479b536ace.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3d3eb830\x2de930\x2d447e\x2db6b4\x2def479b536ace.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f0ce3220\x2d8135\x2d4b29\x2dbe07\x2daad31d86bc53.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-f0ce3220\x2d8135\x2d4b29\x2dbe07\x2daad31d86bc53.mount has successfully entered the 'dead' state.
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514303389Z" level=info msg="runSandbox: deleting pod ID 3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489 from idIndex" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514342311Z" level=info msg="runSandbox: removing pod sandbox 3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514310945Z" level=info msg="runSandbox: deleting pod ID 954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786 from idIndex" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514385742Z" level=info msg="runSandbox: removing pod sandbox 954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514396453Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514410654Z" level=info msg="runSandbox: unmounting shmPath for sandbox 954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23
16:36:53.514357899Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514479918Z" level=info msg="runSandbox: deleting pod ID 015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2 from idIndex" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514494561Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514530576Z" level=info msg="runSandbox: removing pod sandbox 015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514568636Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.514585104Z" level=info msg="runSandbox: unmounting shmPath for sandbox 015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.515310407Z" level=info msg="runSandbox: deleting pod ID 56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee from idIndex" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.515335401Z" level=info msg="runSandbox: removing pod sandbox 56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.515349697Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.515366136Z" level=info msg="runSandbox: unmounting shmPath for sandbox 56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.522324167Z" level=info msg="runSandbox: deleting pod ID 064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1 from idIndex" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.522352814Z" level=info msg="runSandbox: removing pod sandbox 064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1" 
id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.522366686Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.522377075Z" level=info msg="runSandbox: unmounting shmPath for sandbox 064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.530439186Z" level=info msg="runSandbox: removing pod sandbox from storage: 015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.531474318Z" level=info msg="runSandbox: removing pod sandbox from storage: 3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.533361505Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.533381257Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=aa805da4-e2ea-4fbb-9f7a-ab17bd0abfb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.533512 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.533555 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.533578 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.533626 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.536844251Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.536863773Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c9e932b0-8db7-4dd3-a4e5-12068d3ab86c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.537076 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.537118 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.537142 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.537189 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.537545141Z" level=info msg="runSandbox: removing pod sandbox from storage: 56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.537567728Z" level=info msg="runSandbox: removing pod sandbox from storage: 954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.540933931Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.540953382Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=59ebfb85-4ff0-4659-aefd-3b4e1977f3ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.541116 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.541150 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.541173 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.541219 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.543997820Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.544015969Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f9434d83-b13f-4b9f-b0a2-c3bff751d671 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.544183 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.544224 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.544246 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.544296 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.546429751Z" level=info msg="runSandbox: removing pod sandbox from storage: 064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.549689302Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.549708023Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b280a04e-fcb5-4e74-83ae-b08e07c3ece8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.549937 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.549971 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.549991 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:36:53.550031 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:53.574173 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:53.574278 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:53.574434 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.574439719Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.574476945Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.574536514Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.574568130Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:53.574553 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.574667346Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.574698963Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:36:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:36:53.574737 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.574772535Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.574801718Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.575047546Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.575072494Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.601355746Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/95108276-1328-43bf-9aba-a3280225f09d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.601380146Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.605594396Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/c300a643-f844-4f8c-a26b-aeb05fbabac9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.605616145Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.607113699Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/e4d157e5-6e38-440b-81dd-a8483ab38b01 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.607135109Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.607732394Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e UID:69794e08-d62b-401c-8dea-a730bf37256a 
NetNS:/var/run/netns/97aa8a23-2dcf-458e-b5e1-ab07e5a70438 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.607752885Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.611688321Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/22abb02f-eaa9-4292-b090-722fe2ced394 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:53.611711937Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f0ce3220\x2d8135\x2d4b29\x2dbe07\x2daad31d86bc53.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f0ce3220\x2d8135\x2d4b29\x2dbe07\x2daad31d86bc53.mount has successfully entered the 'dead' state. Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-62d63642\x2df9d5\x2d43f1\x2db3c9\x2d6603a5504dc6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-62d63642\x2df9d5\x2d43f1\x2db3c9\x2d6603a5504dc6.mount has successfully entered the 'dead' state. Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5dbedc19\x2d92fe\x2d4176\x2dbdc3\x2d933eac19b6be.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5dbedc19\x2d92fe\x2d4176\x2dbdc3\x2d933eac19b6be.mount has successfully entered the 'dead' state. Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-edf2537c\x2df21e\x2d4f8c\x2d946f\x2dd97ca8bc5a9a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-edf2537c\x2df21e\x2d4f8c\x2d946f\x2dd97ca8bc5a9a.mount has successfully entered the 'dead' state. Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-064bc7bf45db8eb5d51896e88c50491c6fc19dffca028459de3260d87defaeb1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3d3eb830\x2de930\x2d447e\x2db6b4\x2def479b536ace.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3d3eb830\x2de930\x2d447e\x2db6b4\x2def479b536ace.mount has successfully entered the 'dead' state. 
Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-015cfb1eb7bc2b2196766118cb84aa6f469184bd17c3c2cfeee74054d1c0fbc2-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-56b9f244e1089dbd6cf70999694dac5b5f593f1dd373d1aa5592084d19ee4aee-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-954e3705a7ea477eda5121545cc36aff27f201f6d5d6421d31cb23328eaa7786-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:36:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-3bfa3c697c8ff9f34ae609a7ed950162ecb4a17853496b0c440a7c3486b1e489-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.033870767Z" level=info msg="NetworkStart: stopping network for sandbox a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034018497Z" level=info msg="NetworkStart: stopping network for sandbox 6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034025460Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/5f7d3f45-2e78-4293-a385-4337728de1b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034119522Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034129100Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034134876Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/a554125c-a79f-4baa-b658-98ea7110eaa3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034161321Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034167804Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034174254Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.034137824Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.037171120Z" level=info msg="NetworkStart: stopping network for sandbox 85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.037302280Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/281d76ee-b28e-46d4-9d89-d034cd48ca9b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.037324648Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.037332018Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:55.037337528Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:58.020703497Z" level=info msg="NetworkStart: stopping network for sandbox 55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:36:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:58.020844262Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/ee9947ec-16ec-4982-bb52-04d454e78ea8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:36:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:58.020867180Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:36:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:58.020873655Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:36:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:58.020879305Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:36:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:36:58.144322893Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:00.020712267Z" level=info msg="NetworkStart: stopping network for sandbox 47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:00.020864718Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/768e5e38-a4c5-4278-9ff1-976e08a3b782 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:00.020888618Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:00.020896837Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:00.020903345Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:37:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:04.996038 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5"
Jan 23 16:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:04.996773222Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=c4afb2a0-00bd-40ac-80be-eb748deabb59 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:04.996934429Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c4afb2a0-00bd-40ac-80be-eb748deabb59 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:04.997426910Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=d7627151-98ea-4c38-9f6d-63c2f21fbe1b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:04.997555011Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d7627151-98ea-4c38-9f6d-63c2f21fbe1b name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:04.998392125Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=19bdefa3-91a6-4cde-8b16-9303fc526a52 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:04.998470133Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope.
-- Subject: Unit crio-conmon-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:37:05 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.
-- Subject: Unit crio-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.113192688Z" level=info msg="Created container 4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=19bdefa3-91a6-4cde-8b16-9303fc526a52 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.113616448Z" level=info msg="Starting container: 4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" id=53250c3b-ea11-44e0-9819-96a3336100b5 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.120913168Z" level=info msg="Started container" PID=53739 containerID=4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=53250c3b-ea11-44e0-9819-96a3336100b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.125711251Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.135770203Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.135790564Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.135801337Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.144639196Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.144659686Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.144672363Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.153508053Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.153526710Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.153535734Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.161864283Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.161882039Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.161891707Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.170617178Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:05.170637433Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:05.598487 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/182.log"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:05.599722 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc}
Jan 23 16:37:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:05.600069 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:37:05 hub-master-0.workload.bos2.lab conmon[53718]: conmon 4f80c9a9daeb183a4de1 : container 53739 exited with status 1
Jan 23 16:37:05 hub-master-0.workload.bos2.lab systemd[1]: crio-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope has successfully entered the 'dead' state.
Jan 23 16:37:05 hub-master-0.workload.bos2.lab systemd[1]: crio-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope: Consumed 558ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope completed and consumed the indicated resources.
Jan 23 16:37:05 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope has successfully entered the 'dead' state.
Jan 23 16:37:05 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope: Consumed 52ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc.scope completed and consumed the indicated resources.
Jan 23 16:37:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:06.602691 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/183.log" Jan 23 16:37:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:06.603163 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/182.log" Jan 23 16:37:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:06.604189 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" exitCode=1 Jan 23 16:37:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:06.604218 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc} Jan 23 16:37:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:06.604241 8631 scope.go:115] "RemoveContainer" containerID="42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" Jan 23 16:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:06.605060826Z" level=info msg="Removing container: 42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5" id=5e64895f-17a0-4085-a556-46c79e6e21f8 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:37:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:06.605109 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:37:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:06.605604 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:37:06 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-7a2ab068daf56fed3f31f4c02b7dc719bc1e61b20a4d64b451527b945f73f5e1-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-7a2ab068daf56fed3f31f4c02b7dc719bc1e61b20a4d64b451527b945f73f5e1-merged.mount has successfully entered the 'dead' state. 
Jan 23 16:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:06.645736111Z" level=info msg="Removed container 42e86a61d7d742f8acbddb6259fca6b96d44195d615374c0ee17b241584d6ec5: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=5e64895f-17a0-4085-a556-46c79e6e21f8 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:37:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:07.607502 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/183.log" Jan 23 16:37:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:07.609341 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:37:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:07.609831 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:37:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:17.997330 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:37:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:17.997861 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.525922 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-7tt4b] Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.525957 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.532412 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-7tt4b] Jan 23 16:37:18 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-besteffort-pod411e1ce1_49d3_46d7_827f_dbb454e1e01e.slice. -- Subject: Unit kubepods-besteffort-pod411e1ce1_49d3_46d7_827f_dbb454e1e01e.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-besteffort-pod411e1ce1_49d3_46d7_827f_dbb454e1e01e.slice has finished starting up. -- -- The start-up result is done. 
Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.646425 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnjg7\" (UniqueName: \"kubernetes.io/projected/411e1ce1-49d3-46d7-827f-dbb454e1e01e-kube-api-access-pnjg7\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.646456 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/411e1ce1-49d3-46d7-827f-dbb454e1e01e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.646480 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/411e1ce1-49d3-46d7-827f-dbb454e1e01e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.646496 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/411e1ce1-49d3-46d7-827f-dbb454e1e01e-ready\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.747241 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-pnjg7\" (UniqueName: \"kubernetes.io/projected/411e1ce1-49d3-46d7-827f-dbb454e1e01e-kube-api-access-pnjg7\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.747278 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/411e1ce1-49d3-46d7-827f-dbb454e1e01e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.747306 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/411e1ce1-49d3-46d7-827f-dbb454e1e01e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.747325 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/411e1ce1-49d3-46d7-827f-dbb454e1e01e-ready\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.747407 8631 
operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/411e1ce1-49d3-46d7-827f-dbb454e1e01e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.747525 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/411e1ce1-49d3-46d7-827f-dbb454e1e01e-ready\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.747732 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/411e1ce1-49d3-46d7-827f-dbb454e1e01e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.761872 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnjg7\" (UniqueName: \"kubernetes.io/projected/411e1ce1-49d3-46d7-827f-dbb454e1e01e-kube-api-access-pnjg7\") pod \"cni-sysctl-allowlist-ds-7tt4b\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:18.843161 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b" Jan 23 16:37:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:18.843557145Z" level=info msg="Running pod sandbox: openshift-multus/cni-sysctl-allowlist-ds-7tt4b/POD" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:18.843802857Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:18.856795482Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-7tt4b Namespace:openshift-multus ID:811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1 UID:411e1ce1-49d3-46d7-827f-dbb454e1e01e NetNS:/var/run/netns/0b1e5efc-42a5-4249-8e04-8e5a1f5fa804 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:18.856822042Z" level=info msg="Adding pod openshift-multus_cni-sysctl-allowlist-ds-7tt4b to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.038404961Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" 
failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.038442175Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e122b44e\x2d0563\x2d4b40\x2dac2c\x2dbae9e8ffb76a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e122b44e\x2d0563\x2d4b40\x2dac2c\x2dbae9e8ffb76a.mount has successfully entered the 'dead' state. Jan 23 16:37:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e122b44e\x2d0563\x2d4b40\x2dac2c\x2dbae9e8ffb76a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e122b44e\x2d0563\x2d4b40\x2dac2c\x2dbae9e8ffb76a.mount has successfully entered the 'dead' state. Jan 23 16:37:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e122b44e\x2d0563\x2d4b40\x2dac2c\x2dbae9e8ffb76a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e122b44e\x2d0563\x2d4b40\x2dac2c\x2dbae9e8ffb76a.mount has successfully entered the 'dead' state. Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.097292347Z" level=info msg="runSandbox: deleting pod ID 5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab from idIndex" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.097320276Z" level=info msg="runSandbox: removing pod sandbox 5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.097335005Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.097346911Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.116463025Z" level=info msg="runSandbox: removing pod sandbox from storage: 5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.119783796Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:20.119804370Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=35d49135-a7c7-47c9-b9d1-0980b1aac8a8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:20.120034 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:37:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:20.120086 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:37:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:20.120111 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:37:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:20.120162 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5109d2c3e770d5a95a0cdbd3f4c6442ebaf11c9c44e4ce06be97f557381574ab): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:27.865414 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:27.865552 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:27.865560 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:27.865566 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:27.865575 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:27.865582 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:27.865588 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:37:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:28.143118024Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:37:28 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00096|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 adds) Jan 23 16:37:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:31.997084 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:37:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:31.997601 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.037378337Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting 
for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.037424673Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e873befc\x2df37a\x2d41a3\x2d98e8\x2d16846c875614.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e873befc\x2df37a\x2d41a3\x2d98e8\x2d16846c875614.mount has successfully entered the 'dead' state. Jan 23 16:37:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e873befc\x2df37a\x2d41a3\x2d98e8\x2d16846c875614.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e873befc\x2df37a\x2d41a3\x2d98e8\x2d16846c875614.mount has successfully entered the 'dead' state. Jan 23 16:37:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e873befc\x2df37a\x2d41a3\x2d98e8\x2d16846c875614.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e873befc\x2df37a\x2d41a3\x2d98e8\x2d16846c875614.mount has successfully entered the 'dead' state. Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.093363972Z" level=info msg="runSandbox: deleting pod ID bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827 from idIndex" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.093388040Z" level=info msg="runSandbox: removing pod sandbox bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.093409026Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.093424118Z" level=info msg="runSandbox: unmounting shmPath for sandbox bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.109421593Z" level=info msg="runSandbox: removing pod sandbox from storage: bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.113420667Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:33.113439108Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=7aba08f2-517a-4056-a335-3f67e604dfbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:33.113669 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:37:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:33.113717 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:37:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:33.113742 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:37:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:33.113790 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(bb61e00037132e559f048c142eb633be14b9a9bccaa050132c37db56208e2827): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.035148212Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.035348998Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.035280794Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.035444447Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d5bb5c34\x2d3429\x2d4101\x2d8951\x2d0bd76b3e1825.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d5bb5c34\x2d3429\x2d4101\x2d8951\x2d0bd76b3e1825.mount has successfully entered the 'dead' state. Jan 23 16:37:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6fe93b99\x2db035\x2d419e\x2da0b1\x2dd4567ca089d3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6fe93b99\x2db035\x2d419e\x2da0b1\x2dd4567ca089d3.mount has successfully entered the 'dead' state. Jan 23 16:37:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6fe93b99\x2db035\x2d419e\x2da0b1\x2dd4567ca089d3.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6fe93b99\x2db035\x2d419e\x2da0b1\x2dd4567ca089d3.mount has successfully entered the 'dead' state. Jan 23 16:37:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d5bb5c34\x2d3429\x2d4101\x2d8951\x2d0bd76b3e1825.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d5bb5c34\x2d3429\x2d4101\x2d8951\x2d0bd76b3e1825.mount has successfully entered the 'dead' state. Jan 23 16:37:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d5bb5c34\x2d3429\x2d4101\x2d8951\x2d0bd76b3e1825.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d5bb5c34\x2d3429\x2d4101\x2d8951\x2d0bd76b3e1825.mount has successfully entered the 'dead' state. Jan 23 16:37:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6fe93b99\x2db035\x2d419e\x2da0b1\x2dd4567ca089d3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6fe93b99\x2db035\x2d419e\x2da0b1\x2dd4567ca089d3.mount has successfully entered the 'dead' state. Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.082309658Z" level=info msg="runSandbox: deleting pod ID 4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc from idIndex" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.082337530Z" level=info msg="runSandbox: removing pod sandbox 4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.082355142Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.082367309Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.083290852Z" level=info msg="runSandbox: deleting pod ID 3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a from idIndex" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.083315830Z" level=info msg="runSandbox: removing pod sandbox 3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.083328534Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.083341794Z" 
level=info msg="runSandbox: unmounting shmPath for sandbox 3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.098469665Z" level=info msg="runSandbox: removing pod sandbox from storage: 4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.098474930Z" level=info msg="runSandbox: removing pod sandbox from storage: 3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.101790432Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.101808031Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=7bc08fe7-1f14-424a-9bd3-7aa60552aab1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:34.102076 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:34.102123 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:34.102156 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:34.102203 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4050a0428edac4606c62a38be7746cbc22c5b7e439f1051356755286560d26dc): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.104843189Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.104862300Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=27cb7ca9-56b3-44d7-9139-a6198de310d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:34.105072 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:34.105112 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:34.105136 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:34.105184 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 16:37:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:34.996337 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.996717734Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:34.996762412Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.007412932Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/5a5a1151-adec-40bb-9373-0c70e721401d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.007434136Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.036554508Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.036592929Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.036600057Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.036635729Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b698f6f7\x2db5a6\x2d4a03\x2d8bd4\x2d03171e258f01.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b698f6f7\x2db5a6\x2d4a03\x2d8bd4\x2d03171e258f01.mount has successfully entered the 'dead' state.
Jan 23 16:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-70d467df\x2d81a6\x2d4513\x2dbb79\x2defdc80952c6e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-70d467df\x2d81a6\x2d4513\x2dbb79\x2defdc80952c6e.mount has successfully entered the 'dead' state.
Jan 23 16:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-3fde445bf7c54ee423c4f5712df8b9eeddd8584311fcd5a931e29b380c47c50a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-70d467df\x2d81a6\x2d4513\x2dbb79\x2defdc80952c6e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-70d467df\x2d81a6\x2d4513\x2dbb79\x2defdc80952c6e.mount has successfully entered the 'dead' state.
Jan 23 16:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b698f6f7\x2db5a6\x2d4a03\x2d8bd4\x2d03171e258f01.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b698f6f7\x2db5a6\x2d4a03\x2d8bd4\x2d03171e258f01.mount has successfully entered the 'dead' state.
Jan 23 16:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b698f6f7\x2db5a6\x2d4a03\x2d8bd4\x2d03171e258f01.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b698f6f7\x2db5a6\x2d4a03\x2d8bd4\x2d03171e258f01.mount has successfully entered the 'dead' state.
Jan 23 16:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-70d467df\x2d81a6\x2d4513\x2dbb79\x2defdc80952c6e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-70d467df\x2d81a6\x2d4513\x2dbb79\x2defdc80952c6e.mount has successfully entered the 'dead' state.
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.087295641Z" level=info msg="runSandbox: deleting pod ID 33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a from idIndex" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.087319697Z" level=info msg="runSandbox: removing pod sandbox 33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.087334114Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.087346734Z" level=info msg="runSandbox: unmounting shmPath for sandbox 33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.088314382Z" level=info msg="runSandbox: deleting pod ID c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1 from idIndex" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.088339054Z" level=info msg="runSandbox: removing pod sandbox c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.088353672Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.088365495Z" level=info msg="runSandbox: unmounting shmPath for sandbox c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.112470006Z" level=info msg="runSandbox: removing pod sandbox from storage: 33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.112491065Z" level=info msg="runSandbox: removing pod sandbox from storage: c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.115559927Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.115578066Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=e03409e4-aef8-4a5b-a612-19b1b2280be7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:35.115806 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:35.115849 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:35.115873 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:35.115920 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.120981812Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:35.121007436Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=56d84e68-c9fd-4fb4-ab21-ab26ab445bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:35.121256 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:35.121290 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:35.121311 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:35.121350 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(33a528375bc266ae4e23ae7e08132cc05b06f104ba70cedd73ec4a48ecf4934a): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.030195903Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.030243578Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-75367401\x2d4fde\x2d4c90\x2d9642\x2dd587a3ef0b32.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-75367401\x2d4fde\x2d4c90\x2d9642\x2dd587a3ef0b32.mount has successfully entered the 'dead' state.
Jan 23 16:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c158a45907c14a46ebe178f294ab2d80c68407ee4fef0a7f208b807b3ced64a1-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-75367401\x2d4fde\x2d4c90\x2d9642\x2dd587a3ef0b32.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-75367401\x2d4fde\x2d4c90\x2d9642\x2dd587a3ef0b32.mount has successfully entered the 'dead' state.
Jan 23 16:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-netns-75367401\x2d4fde\x2d4c90\x2d9642\x2dd587a3ef0b32.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-75367401\x2d4fde\x2d4c90\x2d9642\x2dd587a3ef0b32.mount has successfully entered the 'dead' state.
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.073427571Z" level=info msg="runSandbox: deleting pod ID e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5 from idIndex" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.073453671Z" level=info msg="runSandbox: removing pod sandbox e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.073467701Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.073479471Z" level=info msg="runSandbox: unmounting shmPath for sandbox e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.080435226Z" level=info msg="runSandbox: removing pod sandbox from storage: e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.083936426Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:36.083953967Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=80c6da05-6b04-44d1-8dc0-d59b060dd715 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:36.084144 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:37:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:36.084192 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:37:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:36.084226 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:37:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:36.084281 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e74a7684bbebd1c1f78241fe3c74e78dd386d04c403b00c4dba152f70622edb5): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.615401770Z" level=info msg="NetworkStart: stopping network for sandbox f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.615547785Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/95108276-1328-43bf-9aba-a3280225f09d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.615573549Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.615580118Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.615587145Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621492544Z" level=info msg="NetworkStart: stopping network for sandbox 34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621545492Z" level=info msg="NetworkStart: stopping network for sandbox 152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621561429Z" level=info msg="NetworkStart: stopping network for sandbox 88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621622746Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/97aa8a23-2dcf-458e-b5e1-ab07e5a70438 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621648218Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621656481Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621664440Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621711437Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/c300a643-f844-4f8c-a26b-aeb05fbabac9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621740378Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/e4d157e5-6e38-440b-81dd-a8483ab38b01 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621740534Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621794352Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621804793Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621769498Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621873399Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.621882039Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.624140232Z" level=info msg="NetworkStart: stopping network for sandbox d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.624281861Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/22abb02f-eaa9-4292-b090-722fe2ced394 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.624308121Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.624316634Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:38.624323789Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.045112193Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.045159247Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.045238862Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.045272569Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.046886163Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.046920932Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5f7d3f45\x2d2e78\x2d4293\x2da385\x2d4337728de1b6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5f7d3f45\x2d2e78\x2d4293\x2da385\x2d4337728de1b6.mount has successfully entered the 'dead' state.
Jan 23 16:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a554125c\x2da79f\x2d4baa\x2db658\x2d98ea7110eaa3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a554125c\x2da79f\x2d4baa\x2db658\x2d98ea7110eaa3.mount has successfully entered the 'dead' state.
Jan 23 16:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-281d76ee\x2db28e\x2d46d4\x2d9d89\x2dd034cd48ca9b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-281d76ee\x2db28e\x2d46d4\x2d9d89\x2dd034cd48ca9b.mount has successfully entered the 'dead' state.
Jan 23 16:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5f7d3f45\x2d2e78\x2d4293\x2da385\x2d4337728de1b6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5f7d3f45\x2d2e78\x2d4293\x2da385\x2d4337728de1b6.mount has successfully entered the 'dead' state.
Jan 23 16:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a554125c\x2da79f\x2d4baa\x2db658\x2d98ea7110eaa3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a554125c\x2da79f\x2d4baa\x2db658\x2d98ea7110eaa3.mount has successfully entered the 'dead' state.
Jan 23 16:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-281d76ee\x2db28e\x2d46d4\x2d9d89\x2dd034cd48ca9b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-281d76ee\x2db28e\x2d46d4\x2d9d89\x2dd034cd48ca9b.mount has successfully entered the 'dead' state.
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.089305570Z" level=info msg="runSandbox: deleting pod ID a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68 from idIndex" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.089339098Z" level=info msg="runSandbox: removing pod sandbox a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.089356269Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.089308219Z" level=info msg="runSandbox: deleting pod ID 6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979 from idIndex" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.089391722Z" level=info msg="runSandbox: removing pod sandbox 6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.089399970Z" level=info msg="runSandbox: unmounting shmPath for sandbox a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.089405034Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.089555153Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.098312394Z" level=info msg="runSandbox: deleting pod ID 85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f from idIndex" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.098337569Z" level=info msg="runSandbox: removing pod sandbox 85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.098351127Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.098364579Z" level=info msg="runSandbox: unmounting shmPath for sandbox 85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.105470253Z" level=info msg="runSandbox: removing pod sandbox from storage: 6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.109086137Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.109105148Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=f310e052-9ad9-44a4-b536-b9e11afe0b01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.109352 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.109399 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.109422 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.109473 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.113444752Z" level=info msg="runSandbox: removing pod sandbox from storage: a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.116589093Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.116608972Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=062b3c41-5766-4c83-99c7-fb90b85570e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.116800 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.116833 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.116853 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.116891 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.117444110Z" level=info msg="runSandbox: removing pod sandbox from storage: 85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.120739579Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:40.120758570Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=f93bc917-5ce2-406a-9c5e-e637cfb4c1cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.120958 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.120993 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.121013 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:40.121051 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:37:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-281d76ee\x2db28e\x2d46d4\x2d9d89\x2dd034cd48ca9b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-281d76ee\x2db28e\x2d46d4\x2d9d89\x2dd034cd48ca9b.mount has successfully entered the 'dead' state.
Jan 23 16:37:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5f7d3f45\x2d2e78\x2d4293\x2da385\x2d4337728de1b6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5f7d3f45\x2d2e78\x2d4293\x2da385\x2d4337728de1b6.mount has successfully entered the 'dead' state.
Jan 23 16:37:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a554125c\x2da79f\x2d4baa\x2db658\x2d98ea7110eaa3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a554125c\x2da79f\x2d4baa\x2db658\x2d98ea7110eaa3.mount has successfully entered the 'dead' state.
Jan 23 16:37:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a64fb5cdc5504fe169f21bddb7177c97f15c6c2bb4fdaa505176bc5d747d9f68-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:37:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-85165240859c6867275fbd8b01b8369163c7cc4736e247fcec61e1618233754f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:37:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6c07327d5a23ca8f8a78026cc3dc60610e9d04f1a6491df6ae77eef738f77979-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.032885266Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.032924492Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ee9947ec\x2d16ec\x2d4982\x2dbb52\x2d04d454e78ea8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ee9947ec\x2d16ec\x2d4982\x2dbb52\x2d04d454e78ea8.mount has successfully entered the 'dead' state. Jan 23 16:37:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ee9947ec\x2d16ec\x2d4982\x2dbb52\x2d04d454e78ea8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ee9947ec\x2d16ec\x2d4982\x2dbb52\x2d04d454e78ea8.mount has successfully entered the 'dead' state. Jan 23 16:37:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ee9947ec\x2d16ec\x2d4982\x2dbb52\x2d04d454e78ea8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ee9947ec\x2d16ec\x2d4982\x2dbb52\x2d04d454e78ea8.mount has successfully entered the 'dead' state. 
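Every sandbox failure in this stretch has the same proximate cause: Multus refuses to service CNI ADD (and DEL) requests until a readiness indicator file for the cluster's default network exists, and /var/run/multus/cni/net.d/10-ovn-kubernetes.conf is never written because ovnkube-node is crash-looping (see the CrashLoopBackOff entries that follow). The "timed out waiting for the condition" text is the generic timeout error from the Kubernetes wait package that "PollImmediate" in these messages refers to. Below is a minimal sketch of that kind of gate, assuming k8s.io/apimachinery's wait helpers; the function name waitForReadinessIndicator and the 1s/10s interval and timeout are illustrative, not Multus's actual values.

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until the indicator file appears,
    // mirroring the gate the log keeps timing out on. (Hypothetical name;
    // a sketch of the mechanism, not Multus source.)
    func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
        return wait.PollImmediate(interval, timeout, func() (bool, error) {
            _, err := os.Stat(path)
            if err == nil {
                return true, nil // file exists: default network is ready
            }
            if os.IsNotExist(err) {
                return false, nil // not written yet: keep polling
            }
            return false, err // any other stat error aborts the wait
        })
    }

    func main() {
        // Illustrative interval/timeout; the path is the one named in the log.
        err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 10*time.Second)
        if err != nil {
            // On timeout this prints "timed out waiting for the condition",
            // the same wait-package error text quoted throughout the log.
            fmt.Println("pollimmediate error:", err)
        }
    }

Once ovnkube-node starts and writes the file, the same poll succeeds on its first iteration, which is why all of these pods recover together rather than one by one.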
Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.071330190Z" level=info msg="runSandbox: deleting pod ID 55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc from idIndex" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.071355688Z" level=info msg="runSandbox: removing pod sandbox 55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.071371814Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.071385317Z" level=info msg="runSandbox: unmounting shmPath for sandbox 55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.095482470Z" level=info msg="runSandbox: removing pod sandbox from storage: 55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.098914810Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:43.098933069Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=257f6add-3010-487a-8555-3a1745c5d309 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:43.099133 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:37:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:43.099184 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:37:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:43.099212 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:37:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:43.099259 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(55818a7cec56f4e1e6e12639101b413ae50e789b2a587427cbe1187e6ea27dbc): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:37:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:43.996345 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:37:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:43.996855 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.032925648Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.032971620Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-768e5e38\x2da4c5\x2d4278\x2d9ff1\x2d976e08a3b782.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-768e5e38\x2da4c5\x2d4278\x2d9ff1\x2d976e08a3b782.mount has successfully entered the 'dead' state. Jan 23 16:37:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-768e5e38\x2da4c5\x2d4278\x2d9ff1\x2d976e08a3b782.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-768e5e38\x2da4c5\x2d4278\x2d9ff1\x2d976e08a3b782.mount has successfully entered the 'dead' state. Jan 23 16:37:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-768e5e38\x2da4c5\x2d4278\x2d9ff1\x2d976e08a3b782.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-768e5e38\x2da4c5\x2d4278\x2d9ff1\x2d976e08a3b782.mount has successfully entered the 'dead' state. 
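The ovnkube-node entries here ("back-off 5m0s restarting failed container", repeated at 16:37:43, 16:37:55, 16:38:06 and again later) show the kubelet's crash-loop restart back-off at its ceiling: the delay between restart attempts grows roughly exponentially up to a five-minute cap, and while the window is open each pod sync just re-logs the error. A toy illustration of that schedule follows; the 10-second initial delay is an assumption (the commonly cited kubelet default), while the 5m cap is taken from the log itself.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second        // assumed initial back-off
        const maxDelay = 5 * time.Minute // the "back-off 5m0s" ceiling seen in the log
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: next attempt in %v\n", restart, delay)
            delay *= 2 // double the delay after each failed restart
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Until one of those attempts keeps ovnkube-node running long enough to write the readiness indicator file, every CNI ADD on this node (etcd-guard, network-metrics-daemon, ingress-canary, the kube-apiserver installer and revision-pruner pods) keeps failing with the same Multus timeout.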
Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.085290533Z" level=info msg="runSandbox: deleting pod ID 47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6 from idIndex" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.085323825Z" level=info msg="runSandbox: removing pod sandbox 47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.085340615Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.085357666Z" level=info msg="runSandbox: unmounting shmPath for sandbox 47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.112472923Z" level=info msg="runSandbox: removing pod sandbox from storage: 47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.115913152Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.115932329Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=c5020f26-e32e-4941-8cb4-e03fb50a0d08 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:45.116146 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:45.116307 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:45.116329 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:45.116379 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(47873e603949815124f5aaa6c8bdde6e5a6b33ecc37196b5db71d105840ac5f6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:45.995847 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:45.996000 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:45.996299 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.996297782Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.996334105Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.996354461Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.996374543Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.996684475Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:45.996722044Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:46.015820940Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/00138474-33fd-4c04-b522-16e654bb5482 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:46.015842322Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:46.017017136Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/09cb9517-42d3-4819-8afb-a95f157f6bfd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:46 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:37:46.017037460Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:46.017497218Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/78b2848d-831f-4cf0-bfe0-6833a6682694 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:46.017516536Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:47.996619 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:37:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:47.996725 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:37:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:47.997000885Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:47.997046514Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:47.997137777Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:47.997184314Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:48.018191303Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/3d68603a-5d0b-4b41-aeb9-4a6b3871c936 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:48.018226548Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:48.018770412Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 
NetNS:/var/run/netns/04fcac82-c5c6-4409-93f7-9bd74b4df0fc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:48.018794595Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:48.996135 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:48.996496961Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:48.996540269Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:49.007658856Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/37ada9f8-e799-4bed-ac94-4385aeb5e9ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:49.007684013Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:51.995714 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:51.996183411Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:51.996401638Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:52.008268572Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/ad5f72f0-f4c5-4956-a682-cd5deb7dbee6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:52.008290088Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:52.996019 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:37:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:52.996378053Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:52.996416430Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:53.007140221Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ec932ae4-29bb-48e5-a969-4be11d00f7d6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:53.007162593Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:53.995967 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:37:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:53.996261980Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:53.996301787Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:54.007341065Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/0683574e-430a-4493-b718-85d715e1667f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:54.007359978Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:54.995476 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:37:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:54.995790269Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:54.995824748Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:55.008021619Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/cca63a19-41ad-4e13-9880-40301b8d2df4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:55.008044560Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:37:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:55.996914 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:37:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:37:55.997460 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:37:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:58.143563527Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:37:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:37:58.995646 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:37:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:58.996110739Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:37:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:58.996159804Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:59.011339085Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/e42c648c-9eb6-4283-922c-eef97f9175eb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:37:59.011361668Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:03.873362057Z" level=info msg="NetworkStart: stopping network for sandbox 811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:03.873532038Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-7tt4b Namespace:openshift-multus ID:811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1 UID:411e1ce1-49d3-46d7-827f-dbb454e1e01e NetNS:/var/run/netns/0b1e5efc-42a5-4249-8e04-8e5a1f5fa804 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:03.873558389Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:03.873565021Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:03.873571324Z" level=info msg="Deleting pod openshift-multus_cni-sysctl-allowlist-ds-7tt4b from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:06.996537 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:38:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:06.997061 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491888.1440] policy: auto-activating connection 'Wired Connection' 
(99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491888.1445] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491888.1446] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491888.1448] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491888.1453] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491888.1457] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:38:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491889.9873] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:38:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:17.998109 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:38:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:17.998674 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:38:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:18.539806 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-7tt4b] Jan 23 16:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:20.021874428Z" level=info msg="NetworkStart: stopping network for sandbox 9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:20.022264784Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/5a5a1151-adec-40bb-9373-0c70e721401d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:20.022291863Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:20.022298667Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:20.022307149Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.627877332Z" 
level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.627915266Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-95108276\x2d1328\x2d43bf\x2d9aba\x2da3280225f09d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-95108276\x2d1328\x2d43bf\x2d9aba\x2da3280225f09d.mount has successfully entered the 'dead' state. Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.633117828Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.633148229Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.633614790Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.633648892Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox 88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.634159061Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.634193497Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.635640416Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.635667937Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-97aa8a23\x2d2dcf\x2d458e\x2db5e1\x2dab07e5a70438.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-97aa8a23\x2d2dcf\x2d458e\x2db5e1\x2dab07e5a70438.mount has successfully entered the 'dead' state. Jan 23 16:38:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e4d157e5\x2d6e38\x2d440b\x2d81dd\x2da8483ab38b01.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e4d157e5\x2d6e38\x2d440b\x2d81dd\x2da8483ab38b01.mount has successfully entered the 'dead' state. Jan 23 16:38:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c300a643\x2df844\x2d4f8c\x2da26b\x2daeb05fbabac9.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c300a643\x2df844\x2d4f8c\x2da26b\x2daeb05fbabac9.mount has successfully entered the 'dead' state. Jan 23 16:38:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-22abb02f\x2deaa9\x2d4292\x2db090\x2d722fe2ced394.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-22abb02f\x2deaa9\x2d4292\x2db090\x2d722fe2ced394.mount has successfully entered the 'dead' state. Jan 23 16:38:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e4d157e5\x2d6e38\x2d440b\x2d81dd\x2da8483ab38b01.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e4d157e5\x2d6e38\x2d440b\x2d81dd\x2da8483ab38b01.mount has successfully entered the 'dead' state. Jan 23 16:38:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-95108276\x2d1328\x2d43bf\x2d9aba\x2da3280225f09d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-95108276\x2d1328\x2d43bf\x2d9aba\x2da3280225f09d.mount has successfully entered the 'dead' state. Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687359128Z" level=info msg="runSandbox: deleting pod ID f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f from idIndex" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687409121Z" level=info msg="runSandbox: removing pod sandbox f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687423653Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687438131Z" level=info msg="runSandbox: unmounting shmPath for sandbox f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687363417Z" level=info msg="runSandbox: deleting pod ID 34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e from idIndex" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687482642Z" level=info msg="runSandbox: removing pod sandbox 34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687495473Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687508848Z" 
level=info msg="runSandbox: unmounting shmPath for sandbox 34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687364118Z" level=info msg="runSandbox: deleting pod ID 88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c from idIndex" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687556835Z" level=info msg="runSandbox: removing pod sandbox 88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687574195Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.687588428Z" level=info msg="runSandbox: unmounting shmPath for sandbox 88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.695304609Z" level=info msg="runSandbox: deleting pod ID d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f from idIndex" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.695330874Z" level=info msg="runSandbox: removing pod sandbox d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.695342379Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.695353437Z" level=info msg="runSandbox: unmounting shmPath for sandbox d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.696314541Z" level=info msg="runSandbox: deleting pod ID 152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94 from idIndex" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.696338204Z" level=info msg="runSandbox: removing pod sandbox 152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.696371524Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94" 
id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.696383156Z" level=info msg="runSandbox: unmounting shmPath for sandbox 152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.703440454Z" level=info msg="runSandbox: removing pod sandbox from storage: f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.703458756Z" level=info msg="runSandbox: removing pod sandbox from storage: 88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.703512376Z" level=info msg="runSandbox: removing pod sandbox from storage: 34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.706669348Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.706691598Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=fec3f9cc-5057-4bb2-be59-ec63dd69a0f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.706983 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.707143 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.707169 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.707227 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.710186050Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.710237828Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=9be82278-1160-4a02-b401-ac9303cf987a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.710468 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.710500 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.710522 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.710558 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.711448407Z" level=info msg="runSandbox: removing pod sandbox from storage: d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.711477774Z" level=info msg="runSandbox: removing pod sandbox from storage: 152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.713624205Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.713644416Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=69b65a1a-4921-49b1-bf9b-da904ef9bd69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.713781 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.713812 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.713834 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.713870 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.716799722Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.716816972Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=ef805e90-61dc-453d-aefa-6d75990da32a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.717013 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.717044 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.717065 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.717100 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.719778109Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.719796684Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=62f1331c-e789-4c0b-a33a-9e9610244286 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.720008 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.720041 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.720062 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:23.720101 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:23.753727 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:23.753878 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754038501Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754076550Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:23.754099 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:23.754132 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754186524Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:23.754225 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754228629Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754528329Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754545078Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754557160Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754587884Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754704097Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.754734879Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.786573618Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/4e978088-2000-4eb0-b783-9af6bc43ba71 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.786597191Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.787388175Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/282f70eb-6be8-4fb9-ab6e-dd0572761f19 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.787408895Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.788071017Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/05174d5d-b760-4f2a-b97c-4eea60d89938 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] 
Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.788092767Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.789167681Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/6b27f2c4-861c-45db-9c5d-d08453353a4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.789192588Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.790030072Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f0cb1d95-0655-45ec-b4e5-6c2da18ddd04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:23.790051050Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-22abb02f\x2deaa9\x2d4292\x2db090\x2d722fe2ced394.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-22abb02f\x2deaa9\x2d4292\x2db090\x2d722fe2ced394.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-22abb02f\x2deaa9\x2d4292\x2db090\x2d722fe2ced394.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-22abb02f\x2deaa9\x2d4292\x2db090\x2d722fe2ced394.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-97aa8a23\x2d2dcf\x2d458e\x2db5e1\x2dab07e5a70438.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-97aa8a23\x2d2dcf\x2d458e\x2db5e1\x2dab07e5a70438.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-97aa8a23\x2d2dcf\x2d458e\x2db5e1\x2dab07e5a70438.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-97aa8a23\x2d2dcf\x2d458e\x2db5e1\x2dab07e5a70438.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e4d157e5\x2d6e38\x2d440b\x2d81dd\x2da8483ab38b01.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e4d157e5\x2d6e38\x2d440b\x2d81dd\x2da8483ab38b01.mount has successfully entered the 'dead' state. 
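The "Got pod network &{Name:... Namespace:... ID:...}" entries above are Go's %+v formatting of the pod-network descriptor that CRI-O hands to the CNI layer. A stripped-down stand-in that reproduces the rendering; the field names and values below are copied from the apiserver-746c4bf98c-9x4mg entry in the log, not taken from CRI-O's source:

package main

import "fmt"

// Stand-in for the pod-network struct whose %+v rendering appears in the
// "Got pod network &{...}" log lines (field set copied from the log).
type PodNetwork struct {
	Name      string
	Namespace string
	ID        string
	UID       string
	NetNS     string
}

func main() {
	pn := &PodNetwork{
		Name:      "apiserver-746c4bf98c-9x4mg",
		Namespace: "openshift-apiserver",
		ID:        "60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274",
		UID:       "43afcd6c-e482-449b-986d-bd52ed16ad2b",
		NetNS:     "/var/run/netns/4e978088-2000-4eb0-b783-9af6bc43ba71",
	}
	fmt.Printf("Got pod network %+v\n", pn)
	// Prints: Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ...}
}

Empty fields print as a bare field name with nothing after the colon, which is why the RuntimeConfig map in the log reads "{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}": nothing is truncated there, those values are simply unset.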
Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d5d0d9beb4b063131145deb574afdea24dc8e0d1088109574e97adf1f136387f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c300a643\x2df844\x2d4f8c\x2da26b\x2daeb05fbabac9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c300a643\x2df844\x2d4f8c\x2da26b\x2daeb05fbabac9.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c300a643\x2df844\x2d4f8c\x2da26b\x2daeb05fbabac9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c300a643\x2df844\x2d4f8c\x2da26b\x2daeb05fbabac9.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-95108276\x2d1328\x2d43bf\x2d9aba\x2da3280225f09d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-95108276\x2d1328\x2d43bf\x2d9aba\x2da3280225f09d.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-34429f43520a3cdba1bedf2890014e7fad6df56e706bf835860e602cf6619c0e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-88bb5208529bda4e4bbd1e8309e76af325f311916e9647bbeaf910ea9bb9f75c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-152fbfc8d77192c51ced5bf9de3ed67434feca79e71134c6f1a8bb81e9197b94-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:38:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f75ed65b7ade259e04e7df56d9659e4280e1a6b80a6bb6a2ecb9a41ea3580e4f-userdata-shm.mount has successfully entered the 'dead' state. 
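Every sandbox failure in the 16:38:23 burst above has the same root cause: Multus refuses each CNI ADD until the default network's readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, exists, and that file is never written because ovnkube-node is crash-looping (see the CrashLoopBackOff entries further down). The trailing "pollimmediate error: timed out waiting for the condition" is the stock error string of the Kubernetes wait helper. A minimal sketch of such a readiness wait, as an illustration of wait.PollImmediate rather than Multus's actual code; the one-second interval and one-minute timeout are assumptions:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	indicator := "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	// Check once immediately, then every second, giving up after a minute
	// (interval and timeout are assumptions for this sketch).
	err := wait.PollImmediate(1*time.Second, 1*time.Minute, func() (bool, error) {
		if _, statErr := os.Stat(indicator); statErr == nil {
			return true, nil // file exists: default network is ready
		} else if os.IsNotExist(statErr) {
			return false, nil // not there yet, keep polling
		} else {
			return false, statErr // unexpected error aborts the wait
		}
	})
	if err != nil {
		// On timeout this prints the same text seen in the log, because
		// wait.ErrWaitTimeout's message is "timed out waiting for the condition".
		fmt.Printf("pollimmediate error: %v\n", err)
	}
}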
Jan 23 16:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:27.866291 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:27.866313 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:27.866320 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:27.866326 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:27.866332 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:27.866339 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:27.866345 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:38:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:28.143860638Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:30.996169 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:30.996718 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031279810Z" level=info msg="NetworkStart: stopping network for sandbox b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031547584Z" level=info msg="NetworkStart: stopping network for sandbox ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031292787Z" level=info msg="NetworkStart: stopping network for sandbox e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031659256Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics 
ID:b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/78b2848d-831f-4cf0-bfe0-6833a6682694 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031683540Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031691605Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031697820Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031666173Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/00138474-33fd-4c04-b522-16e654bb5482 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031759502Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031767914Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031774186Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031741404Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/09cb9517-42d3-4819-8afb-a95f157f6bfd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031830836Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031837430Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:31.031842671Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031300455Z" level=info msg="NetworkStart: stopping network for sandbox fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031349181Z" level=info msg="NetworkStart: stopping network for sandbox 
3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031461352Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/3d68603a-5d0b-4b41-aeb9-4a6b3871c936 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031484425Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031490722Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031497504Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031496806Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/04fcac82-c5c6-4409-93f7-9bd74b4df0fc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031589958Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031597393Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:33.031603327Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:34.021746447Z" level=info msg="NetworkStart: stopping network for sandbox e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:34.021913756Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/37ada9f8-e799-4bed-ac94-4385aeb5e9ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:34.021939290Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:34 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:38:34.021946497Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:34.021954381Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:37.021894466Z" level=info msg="NetworkStart: stopping network for sandbox d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:37.022036255Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/ad5f72f0-f4c5-4956-a682-cd5deb7dbee6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:37.022058300Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:37.022064852Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:37.022071013Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:38.019224523Z" level=info msg="NetworkStart: stopping network for sandbox 6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:38.019371987Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ec932ae4-29bb-48e5-a969-4be11d00f7d6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:38.019396444Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:38.019404025Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:38.019410871Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:39.020456366Z" level=info msg="NetworkStart: stopping network for sandbox cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:38:39.020602464Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/0683574e-430a-4493-b718-85d715e1667f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:39.020624971Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:39.020631892Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:39.020637963Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:40.020383869Z" level=info msg="NetworkStart: stopping network for sandbox cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:40.020531109Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/cca63a19-41ad-4e13-9880-40301b8d2df4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:40.020555312Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:40.020562561Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:40.020570055Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:43.996960 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:38:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:43.997608 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:38:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:44.025032401Z" level=info msg="NetworkStart: stopping network for sandbox 954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:44.025169243Z" level=info msg="Got pod network 
&{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/e42c648c-9eb6-4283-922c-eef97f9175eb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:38:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:44.025191550Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:38:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:44.025197985Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:38:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:44.025209621Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.885043724Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_cni-sysctl-allowlist-ds-7tt4b_openshift-multus_411e1ce1-49d3-46d7-827f-dbb454e1e01e_0(811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1): error removing pod openshift-multus_cni-sysctl-allowlist-ds-7tt4b from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/cni-sysctl-allowlist-ds-7tt4b/411e1ce1-49d3-46d7-827f-dbb454e1e01e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.885092303Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:38:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0b1e5efc\x2d42a5\x2d4249\x2d8e04\x2d8e5a1f5fa804.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0b1e5efc\x2d42a5\x2d4249\x2d8e04\x2d8e5a1f5fa804.mount has successfully entered the 'dead' state. Jan 23 16:38:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0b1e5efc\x2d42a5\x2d4249\x2d8e04\x2d8e5a1f5fa804.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0b1e5efc\x2d42a5\x2d4249\x2d8e04\x2d8e5a1f5fa804.mount has successfully entered the 'dead' state. Jan 23 16:38:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0b1e5efc\x2d42a5\x2d4249\x2d8e04\x2d8e5a1f5fa804.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0b1e5efc\x2d42a5\x2d4249\x2d8e04\x2d8e5a1f5fa804.mount has successfully entered the 'dead' state. 
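The RemoveContainer/CrashLoopBackOff entries at 16:38:30 and 16:38:43 above show why the indicator file never appears: the kubelet will not restart the failed ovnkube-node container until its restart back-off expires, and that back-off has already reached its 5m0s ceiling. A minimal sketch of the doubling back-off; the 10s base and 5m cap match upstream kubelet defaults to the best of my knowledge and are assumptions here, not values read from this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second        // assumed kubelet base back-off
	const maxBackoff = 5 * time.Minute // assumed cap (MaxContainerBackOff)
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			// After a handful of crashes the delay pegs at the cap, which is
			// the "back-off 5m0s restarting failed container" seen in the log.
			backoff = maxBackoff
		}
	}
}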
Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.941358013Z" level=info msg="runSandbox: deleting pod ID 811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1 from idIndex" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.941396864Z" level=info msg="runSandbox: removing pod sandbox 811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.941416875Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.941431873Z" level=info msg="runSandbox: unmounting shmPath for sandbox 811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:38:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.957463637Z" level=info msg="runSandbox: removing pod sandbox from storage: 811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.960421162Z" level=info msg="runSandbox: releasing container name: k8s_POD_cni-sysctl-allowlist-ds-7tt4b_openshift-multus_411e1ce1-49d3-46d7-827f-dbb454e1e01e_0" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:48.960440939Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_cni-sysctl-allowlist-ds-7tt4b_openshift-multus_411e1ce1-49d3-46d7-827f-dbb454e1e01e_0" id=7b016812-09d0-4496-9877-bd76045b495a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:38:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:48.960711 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-7tt4b_openshift-multus_411e1ce1-49d3-46d7-827f-dbb454e1e01e_0(811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1): error adding pod openshift-multus_cni-sysctl-allowlist-ds-7tt4b to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-7tt4b/411e1ce1-49d3-46d7-827f-dbb454e1e01e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:38:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:48.960761 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-7tt4b_openshift-multus_411e1ce1-49d3-46d7-827f-dbb454e1e01e_0(811b67944e5d1da083e571efba4fc2ea7a7f2bb25fe62d2fd5c256b7b9ef99f1): error adding pod openshift-multus_cni-sysctl-allowlist-ds-7tt4b to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-7tt4b/411e1ce1-49d3-46d7-827f-dbb454e1e01e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/cni-sysctl-allowlist-ds-7tt4b"
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.866722 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/411e1ce1-49d3-46d7-827f-dbb454e1e01e-tuning-conf-dir\") pod \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") "
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.866758 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnjg7\" (UniqueName: \"kubernetes.io/projected/411e1ce1-49d3-46d7-827f-dbb454e1e01e-kube-api-access-pnjg7\") pod \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") "
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.866758 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/411e1ce1-49d3-46d7-827f-dbb454e1e01e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "411e1ce1-49d3-46d7-827f-dbb454e1e01e" (UID: "411e1ce1-49d3-46d7-827f-dbb454e1e01e"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.866784 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/411e1ce1-49d3-46d7-827f-dbb454e1e01e-ready\") pod \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") "
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.866805 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/411e1ce1-49d3-46d7-827f-dbb454e1e01e-cni-sysctl-allowlist\") pod \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\" (UID: \"411e1ce1-49d3-46d7-827f-dbb454e1e01e\") "
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.866886 8631 reconciler.go:399] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/411e1ce1-49d3-46d7-827f-dbb454e1e01e-tuning-conf-dir\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:38:49.866998 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/411e1ce1-49d3-46d7-827f-dbb454e1e01e/volumes/kubernetes.io~empty-dir/ready: clearQuota called, but quotas disabled
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.867032 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411e1ce1-49d3-46d7-827f-dbb454e1e01e-ready" (OuterVolumeSpecName: "ready") pod "411e1ce1-49d3-46d7-827f-dbb454e1e01e" (UID: "411e1ce1-49d3-46d7-827f-dbb454e1e01e"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:38:49.867043 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/411e1ce1-49d3-46d7-827f-dbb454e1e01e/volumes/kubernetes.io~configmap/cni-sysctl-allowlist: clearQuota called, but quotas disabled
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.867173 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411e1ce1-49d3-46d7-827f-dbb454e1e01e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "411e1ce1-49d3-46d7-827f-dbb454e1e01e" (UID: "411e1ce1-49d3-46d7-827f-dbb454e1e01e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:38:49 hub-master-0.workload.bos2.lab systemd[1]: var-lib-kubelet-pods-411e1ce1\x2d49d3\x2d46d7\x2d827f\x2ddbb454e1e01e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpnjg7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-kubelet-pods-411e1ce1\x2d49d3\x2d46d7\x2d827f\x2ddbb454e1e01e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpnjg7.mount has successfully entered the 'dead' state.
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.880756 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411e1ce1-49d3-46d7-827f-dbb454e1e01e-kube-api-access-pnjg7" (OuterVolumeSpecName: "kube-api-access-pnjg7") pod "411e1ce1-49d3-46d7-827f-dbb454e1e01e" (UID: "411e1ce1-49d3-46d7-827f-dbb454e1e01e"). InnerVolumeSpecName "kube-api-access-pnjg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.967626 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-pnjg7\" (UniqueName: \"kubernetes.io/projected/411e1ce1-49d3-46d7-827f-dbb454e1e01e-kube-api-access-pnjg7\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.967645 8631 reconciler.go:399] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/411e1ce1-49d3-46d7-827f-dbb454e1e01e-ready\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:38:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:49.967654 8631 reconciler.go:399] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/411e1ce1-49d3-46d7-827f-dbb454e1e01e-cni-sysctl-allowlist\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:38:50 hub-master-0.workload.bos2.lab systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod411e1ce1_49d3_46d7_827f_dbb454e1e01e.slice.
-- Subject: Unit kubepods-besteffort-pod411e1ce1_49d3_46d7_827f_dbb454e1e01e.slice has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-besteffort-pod411e1ce1_49d3_46d7_827f_dbb454e1e01e.slice has finished shutting down.
Jan 23 16:38:50 hub-master-0.workload.bos2.lab systemd[1]: kubepods-besteffort-pod411e1ce1_49d3_46d7_827f_dbb454e1e01e.slice: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit kubepods-besteffort-pod411e1ce1_49d3_46d7_827f_dbb454e1e01e.slice completed and consumed the indicated resources.
Jan 23 16:38:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:50.816496 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-7tt4b]
Jan 23 16:38:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:50.817710 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-7tt4b]
Jan 23 16:38:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:51.998650 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=411e1ce1-49d3-46d7-827f-dbb454e1e01e path="/var/lib/kubelet/pods/411e1ce1-49d3-46d7-827f-dbb454e1e01e/volumes"
Jan 23 16:38:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:38:56.997004 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc"
Jan 23 16:38:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:38:56.997507 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:38:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:38:58.143090102Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:39:00 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00097|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 deletes)
Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.033188760Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.033240555Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5a5a1151\x2dadec\x2d40bb\x2d9373\x2d0c70e721401d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5a5a1151\x2dadec\x2d40bb\x2d9373\x2d0c70e721401d.mount has successfully entered the 'dead' state.
Jan 23 16:39:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5a5a1151\x2dadec\x2d40bb\x2d9373\x2d0c70e721401d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5a5a1151\x2dadec\x2d40bb\x2d9373\x2d0c70e721401d.mount has successfully entered the 'dead' state.
Jan 23 16:39:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5a5a1151\x2dadec\x2d40bb\x2d9373\x2d0c70e721401d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5a5a1151\x2dadec\x2d40bb\x2d9373\x2d0c70e721401d.mount has successfully entered the 'dead' state.
Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.070305718Z" level=info msg="runSandbox: deleting pod ID 9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3 from idIndex" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.070332942Z" level=info msg="runSandbox: removing pod sandbox 9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.070346926Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.070358945Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:05 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3-userdata-shm.mount has successfully entered the 'dead' state.
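The \x2d runs in these mount unit names are systemd's escaping of "-" inside a path component ("/" itself maps to "-" in unit names); systemd-escape --unescape reverses it on a live system. A small Go sketch of just the \xNN decoding rule, so the netns names above can be read back as plain UUIDs (the full systemd algorithm also handles leading dots and other cases, which this deliberately skips):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnit decodes systemd \xNN escapes, so a unit like
    // run-utsns-0b1e5efc\x2d42a5...mount maps back to the namespace name
    // 0b1e5efc-42a5-.... Only the \xNN rule is handled here.
    func unescapeUnit(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); {
            if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
                if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(n))
                    i += 4
                    continue
                }
            }
            b.WriteByte(s[i])
            i++
        }
        return b.String()
    }

    func main() {
        fmt.Println(unescapeUnit(`run-utsns-5a5a1151\x2dadec\x2d40bb\x2d9373\x2d0c70e721401d.mount`))
        // -> run-utsns-5a5a1151-adec-40bb-9373-0c70e721401d.mount
    }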
Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.086453948Z" level=info msg="runSandbox: removing pod sandbox from storage: 9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.090050885Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:05.090070070Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=3598a5f7-f672-4ee3-b263-7aed62cc350e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:05.090249 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:05.090447 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:05.090485 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:05.090561 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9cd83f640120aaf7dd8e1d5743fd87c2357a97dc152ffa7709299fac311b29b3): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.799464977Z" level=info msg="NetworkStart: stopping network for sandbox 60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.799621690Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/4e978088-2000-4eb0-b783-9af6bc43ba71 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.799646047Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.799657005Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.799664051Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.799982703Z" level=info msg="NetworkStart: stopping network for sandbox 71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800141267Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/282f70eb-6be8-4fb9-ab6e-dd0572761f19 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800169601Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800178817Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800186389Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800829154Z" level=info msg="NetworkStart: stopping network for sandbox 1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800959481Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication 
ID:1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/05174d5d-b760-4f2a-b97c-4eea60d89938 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800982847Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800989656Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.800995968Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.802008667Z" level=info msg="NetworkStart: stopping network for sandbox 7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.802121453Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/6b27f2c4-861c-45db-9c5d-d08453353a4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.802144812Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.802151680Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.802157997Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.804119777Z" level=info msg="NetworkStart: stopping network for sandbox f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.804222921Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f0cb1d95-0655-45ec-b4e5-6c2da18ddd04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.804245508Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:39:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.804254151Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:39:08 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:08.804261135Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:10.996130 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:39:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:10.996633 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.042997388Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.043260350Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.043280368Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.043318156Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.043013790Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.043423050Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-78b2848d\x2d831f\x2d4cf0\x2dbfe0\x2d6833a6682694.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-78b2848d\x2d831f\x2d4cf0\x2dbfe0\x2d6833a6682694.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-09cb9517\x2d42d3\x2d4819\x2d8afb\x2da95f157f6bfd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-09cb9517\x2d42d3\x2d4819\x2d8afb\x2da95f157f6bfd.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-00138474\x2d33fd\x2d4c04\x2db522\x2d16e654bb5482.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-00138474\x2d33fd\x2d4c04\x2db522\x2d16e654bb5482.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-09cb9517\x2d42d3\x2d4819\x2d8afb\x2da95f157f6bfd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-09cb9517\x2d42d3\x2d4819\x2d8afb\x2da95f157f6bfd.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-78b2848d\x2d831f\x2d4cf0\x2dbfe0\x2d6833a6682694.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-78b2848d\x2d831f\x2d4cf0\x2dbfe0\x2d6833a6682694.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-00138474\x2d33fd\x2d4c04\x2db522\x2d16e654bb5482.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-00138474\x2d33fd\x2d4c04\x2db522\x2d16e654bb5482.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-78b2848d\x2d831f\x2d4cf0\x2dbfe0\x2d6833a6682694.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-78b2848d\x2d831f\x2d4cf0\x2dbfe0\x2d6833a6682694.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-09cb9517\x2d42d3\x2d4819\x2d8afb\x2da95f157f6bfd.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-09cb9517\x2d42d3\x2d4819\x2d8afb\x2da95f157f6bfd.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-00138474\x2d33fd\x2d4c04\x2db522\x2d16e654bb5482.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-00138474\x2d33fd\x2d4c04\x2db522\x2d16e654bb5482.mount has successfully entered the 'dead' state. Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098338313Z" level=info msg="runSandbox: deleting pod ID b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285 from idIndex" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098368228Z" level=info msg="runSandbox: removing pod sandbox b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098381728Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098395821Z" level=info msg="runSandbox: unmounting shmPath for sandbox b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098341121Z" level=info msg="runSandbox: deleting pod ID ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791 from idIndex" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098470567Z" level=info msg="runSandbox: removing pod sandbox ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098486244Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098503453Z" level=info msg="runSandbox: unmounting shmPath for sandbox ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098344369Z" level=info msg="runSandbox: deleting pod ID e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73 from idIndex" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098589152Z" level=info msg="runSandbox: removing pod sandbox 
e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098606244Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.098619454Z" level=info msg="runSandbox: unmounting shmPath for sandbox e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.110457356Z" level=info msg="runSandbox: removing pod sandbox from storage: e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.113890132Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.113907093Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d3f00443-b91a-4286-8fb7-0a0e10cfa92a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.114123 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.114185 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.114218 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.114265 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.118441915Z" level=info msg="runSandbox: removing pod sandbox from storage: ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.118570046Z" level=info msg="runSandbox: removing pod sandbox from storage: b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.122024295Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.122044687Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=cfd40809-2555-436b-ba38-25d7e8c1f5a6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.122243 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.122276 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.122300 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.122340 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.125106169Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:16.125125042Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=7ac3721e-4a5b-4d9e-a858-5dbfd788fb1d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.125343 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.125381 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.125404 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:39:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:16.125450 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:39:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e347e06b96673d9c2b121e3dbfa3c285da0ff3303c9d8e1ca6f736b2a7cbbc73-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b69e815d67a9ffd661d94a54854654f94c721fe18c90954a2410681eedb47285-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ef88d46ea2440e041fee5e607a5a26e5a62a0f7b28e29500e5e0482d8102b791-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:17.996390 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:17.996716207Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:17.996753720Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.009088623Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/03d12329-66a6-4f65-8c7e-862aa95bee27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.009108090Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.042639697Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.042671446Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.042908865Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e45d2b65-6e37-4ec6-b510-5c3911335690 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.042938632Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-04fcac82\x2dc5c6\x2d4409\x2d93f7\x2d9bd74b4df0fc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-04fcac82\x2dc5c6\x2d4409\x2d93f7\x2d9bd74b4df0fc.mount has successfully entered the 'dead' state. Jan 23 16:39:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3d68603a\x2d5d0b\x2d4b41\x2daeb9\x2d4a6b3871c936.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3d68603a\x2d5d0b\x2d4b41\x2daeb9\x2d4a6b3871c936.mount has successfully entered the 'dead' state. Jan 23 16:39:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-04fcac82\x2dc5c6\x2d4409\x2d93f7\x2d9bd74b4df0fc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-04fcac82\x2dc5c6\x2d4409\x2d93f7\x2d9bd74b4df0fc.mount has successfully entered the 'dead' state. Jan 23 16:39:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3d68603a\x2d5d0b\x2d4b41\x2daeb9\x2d4a6b3871c936.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3d68603a\x2d5d0b\x2d4b41\x2daeb9\x2d4a6b3871c936.mount has successfully entered the 'dead' state. Jan 23 16:39:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3d68603a\x2d5d0b\x2d4b41\x2daeb9\x2d4a6b3871c936.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3d68603a\x2d5d0b\x2d4b41\x2daeb9\x2d4a6b3871c936.mount has successfully entered the 'dead' state. Jan 23 16:39:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-04fcac82\x2dc5c6\x2d4409\x2d93f7\x2d9bd74b4df0fc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-04fcac82\x2dc5c6\x2d4409\x2d93f7\x2d9bd74b4df0fc.mount has successfully entered the 'dead' state. 
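[Editor's note] The recurring "still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" failures above come from Multus polling for the default network's readiness indicator file before it will serve a CNI ADD or DEL. Below is a minimal sketch of that wait loop, assuming a plain os.Stat check and using wait.PollImmediate from k8s.io/apimachinery, whose timeout error carries exactly the "timed out waiting for the condition" text in these entries; the interval, timeout, and function name are illustrative, not Multus's actual settings or code.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator blocks until the readiness indicator file exists,
// checking once immediately and then on every interval tick. If the default
// network (OVN-Kubernetes here) never writes the file, PollImmediate returns
// wait.ErrWaitTimeout, whose message is the "timed out waiting for the
// condition" string seen throughout this log.
func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err == nil {
			return true, nil // file present: default network is ready
		}
		return false, nil // keep polling until the timeout expires
	})
}

func main() {
	// Illustrative values; the real plugin derives the path and timing
	// from its configuration.
	err := waitForReadinessIndicator(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		1*time.Second, 10*time.Second,
	)
	if err != nil {
		fmt.Printf("pollimmediate error: %v\n", err)
	}
}
```

Because the indicator file is written by the default network plugin once it is up, every sandbox ADD/DEL on this node keeps failing with the same timeout until ovnkube-node recovers (see its CrashLoopBackOff entries later in this log).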
Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.093323329Z" level=info msg="runSandbox: deleting pod ID fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad from idIndex" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.093348729Z" level=info msg="runSandbox: removing pod sandbox fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.093363078Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.093375659Z" level=info msg="runSandbox: unmounting shmPath for sandbox fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.093325312Z" level=info msg="runSandbox: deleting pod ID 3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1 from idIndex" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.093432422Z" level=info msg="runSandbox: removing pod sandbox 3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.093444813Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.093458900Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.107441526Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.107458958Z" level=info msg="runSandbox: removing pod sandbox from storage: fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.110272305Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.110290137Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=5c6d55d4-0d13-4799-bc98-8735be698334 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:18.110530 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:18.110572 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:18.110594 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:18.110643 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(fff8faac729bebf0ae137766f6ddb729d28c8de1f1f462b8030c60ac4ea397ad): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.113432115Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:18.113449331Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=e45d2b65-6e37-4ec6-b510-5c3911335690 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:18.113624 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:18.113664 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:18.113686 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:18.113729 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3ed6b9f09300f318588518d2f6a449f1e07343450cee52404b7f4a0f178da4d1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.032977325Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.033016081Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-37ada9f8\x2de799\x2d4bed\x2dac94\x2d4385aeb5e9ad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-37ada9f8\x2de799\x2d4bed\x2dac94\x2d4385aeb5e9ad.mount has successfully entered the 'dead' state. Jan 23 16:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-37ada9f8\x2de799\x2d4bed\x2dac94\x2d4385aeb5e9ad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-37ada9f8\x2de799\x2d4bed\x2dac94\x2d4385aeb5e9ad.mount has successfully entered the 'dead' state. Jan 23 16:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-netns-37ada9f8\x2de799\x2d4bed\x2dac94\x2d4385aeb5e9ad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-37ada9f8\x2de799\x2d4bed\x2dac94\x2d4385aeb5e9ad.mount has successfully entered the 'dead' state. 
Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.077288806Z" level=info msg="runSandbox: deleting pod ID e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b from idIndex" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.077316073Z" level=info msg="runSandbox: removing pod sandbox e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.077336144Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.077352671Z" level=info msg="runSandbox: unmounting shmPath for sandbox e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.092418370Z" level=info msg="runSandbox: removing pod sandbox from storage: e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.099839747Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:19.099863516Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=d0579092-fcba-441c-9ccb-a0fa2bb030f9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:19.100083 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:19.100133 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:19.100157 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:19.100213 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e6f0d93439eb50ba55f4b6f024836c42558f18678e6ce706978af343ca7d6d7b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.031959950Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.032003629Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ad5f72f0\x2df4c5\x2d4956\x2da682\x2dcd5deb7dbee6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ad5f72f0\x2df4c5\x2d4956\x2da682\x2dcd5deb7dbee6.mount has successfully entered the 'dead' state. Jan 23 16:39:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ad5f72f0\x2df4c5\x2d4956\x2da682\x2dcd5deb7dbee6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ad5f72f0\x2df4c5\x2d4956\x2da682\x2dcd5deb7dbee6.mount has successfully entered the 'dead' state. Jan 23 16:39:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ad5f72f0\x2df4c5\x2d4956\x2da682\x2dcd5deb7dbee6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ad5f72f0\x2df4c5\x2d4956\x2da682\x2dcd5deb7dbee6.mount has successfully entered the 'dead' state. 
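[Editor's note] Each failed RunPodSandbox above is followed by the same runSandbox teardown, in the same order: delete the pod ID from the ID index, remove the sandbox, delete its container ID, unmount the shm path (which then surfaces as a systemd `…-userdata-shm.mount` unit entering the 'dead' state), remove the sandbox from storage, and finally release the container and sandbox names. The sketch below only mirrors that logged order; the type and helpers are placeholders standing in for CRI-O internals, not its real API.

```go
package main

import "log"

// sandbox carries just the fields needed to illustrate the teardown order
// visible in the runSandbox log lines; CRI-O's real type is far richer.
type sandbox struct {
	podID, containerName, sandboxName string
}

// cleanupSandbox logs the same milestones, in the same order, as the crio
// "runSandbox: …" messages above. The real steps each do work (index
// updates, unmounts, storage removal); here they are log-only placeholders.
func cleanupSandbox(s sandbox) {
	log.Printf("runSandbox: deleting pod ID %s from idIndex", s.podID)
	log.Printf("runSandbox: removing pod sandbox %s", s.podID)
	log.Printf("runSandbox: deleting container ID from idIndex for sandbox %s", s.podID)
	log.Printf("runSandbox: unmounting shmPath for sandbox %s", s.podID)
	log.Printf("runSandbox: removing pod sandbox from storage: %s", s.podID)
	log.Printf("runSandbox: releasing container name: %s", s.containerName)
	log.Printf("runSandbox: releasing pod sandbox name: %s", s.sandboxName)
}

func main() {
	cleanupSandbox(sandbox{
		podID:         "e6f0d934…", // truncated for the sketch
		containerName: "k8s_POD_revision-pruner-9-…",
		sandboxName:   "k8s_revision-pruner-9-…",
	})
}
```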
Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.078304740Z" level=info msg="runSandbox: deleting pod ID d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844 from idIndex" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.078336073Z" level=info msg="runSandbox: removing pod sandbox d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.078352566Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.078365152Z" level=info msg="runSandbox: unmounting shmPath for sandbox d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.095461093Z" level=info msg="runSandbox: removing pod sandbox from storage: d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.098800596Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:22.098820603Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=1690ce8d-5588-4941-abee-76bd229138e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:22.099032 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:39:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:22.099080 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:39:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:22.099101 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:39:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:22.099161 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d20451cd5509b8b2df6efc4f0a763e6ef5cebe81a53610a965285ed1f7168844): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.029661451Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.029704907Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ec932ae4\x2d29bb\x2d48e5\x2da969\x2d4be11d00f7d6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ec932ae4\x2d29bb\x2d48e5\x2da969\x2d4be11d00f7d6.mount has successfully entered the 'dead' state. Jan 23 16:39:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ec932ae4\x2d29bb\x2d48e5\x2da969\x2d4be11d00f7d6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ec932ae4\x2d29bb\x2d48e5\x2da969\x2d4be11d00f7d6.mount has successfully entered the 'dead' state. Jan 23 16:39:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ec932ae4\x2d29bb\x2d48e5\x2da969\x2d4be11d00f7d6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ec932ae4\x2d29bb\x2d48e5\x2da969\x2d4be11d00f7d6.mount has successfully entered the 'dead' state. 
Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.084287564Z" level=info msg="runSandbox: deleting pod ID 6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb from idIndex" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.084314272Z" level=info msg="runSandbox: removing pod sandbox 6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.084330161Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.084344063Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.100437217Z" level=info msg="runSandbox: removing pod sandbox from storage: 6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.103857996Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:23.103877344Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=340ba276-1f34-470e-88db-6e87146a29e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:23.104011 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:39:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:23.104170 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:23.104193 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:23.104246 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6e0ed5a6e7036ced4af1013120242363a72f6c83025410919c402469025eaefb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.030984294Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.031025672Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0683574e\x2d430a\x2d4493\x2db718\x2d85d715e1667f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0683574e\x2d430a\x2d4493\x2db718\x2d85d715e1667f.mount has successfully entered the 'dead' state. Jan 23 16:39:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0683574e\x2d430a\x2d4493\x2db718\x2d85d715e1667f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0683574e\x2d430a\x2d4493\x2db718\x2d85d715e1667f.mount has successfully entered the 'dead' state. Jan 23 16:39:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0683574e\x2d430a\x2d4493\x2db718\x2d85d715e1667f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0683574e\x2d430a\x2d4493\x2db718\x2d85d715e1667f.mount has successfully entered the 'dead' state. 
Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.075307645Z" level=info msg="runSandbox: deleting pod ID cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c from idIndex" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.075331839Z" level=info msg="runSandbox: removing pod sandbox cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.075344278Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.075356234Z" level=info msg="runSandbox: unmounting shmPath for sandbox cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.092445047Z" level=info msg="runSandbox: removing pod sandbox from storage: cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.095740874Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:24.095760659Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=a611c09a-d961-459b-a5f6-5ef45aab3a43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:24.095968 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:39:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:24.096014 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:39:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:24.096035 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:39:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:24.096081 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(cd3bd53a4ac73388afc2f5000b72c000302a963233f783bd14861299bf8dec8c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:39:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:24.996810 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:39:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:24.997374 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.031022558Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.031083032Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cca63a19\x2d41ad\x2d4e13\x2d9880\x2d40301b8d2df4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cca63a19\x2d41ad\x2d4e13\x2d9880\x2d40301b8d2df4.mount has successfully entered the 'dead' state. Jan 23 16:39:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cca63a19\x2d41ad\x2d4e13\x2d9880\x2d40301b8d2df4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cca63a19\x2d41ad\x2d4e13\x2d9880\x2d40301b8d2df4.mount has successfully entered the 'dead' state. Jan 23 16:39:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cca63a19\x2d41ad\x2d4e13\x2d9880\x2d40301b8d2df4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cca63a19\x2d41ad\x2d4e13\x2d9880\x2d40301b8d2df4.mount has successfully entered the 'dead' state. 
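[Editor's note] The "back-off 5m0s restarting failed container=ovnkube-node … CrashLoopBackOff" entry above reflects kubelet's restart backoff: after each failed restart the delay doubles until it pins at a cap, which this log shows as 5m0s. A small sketch of that doubling-with-cap policy follows; the 10s starting delay is kubelet's usual default, but the code is illustrative, not kubelet's implementation.

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous restart delay and clamps it to cap,
// which is how the delay in the CrashLoopBackOff message grows until it
// sticks at the maximum (5m0s in this log).
func nextBackoff(prev, cap time.Duration) time.Duration {
	next := prev * 2
	if next > cap {
		return cap
	}
	return next
}

func main() {
	d := 10 * time.Second // assumed initial delay; the cap mirrors "back-off 5m0s"
	for i := 0; i < 7; i++ {
		fmt.Printf("restart %d: back-off %v\n", i+1, d)
		d = nextBackoff(d, 5*time.Minute)
	}
}
```

Once the delay reaches the cap, the pod is retried only every five minutes, which is why the ovnkube-node skip messages recur at that cadence while the Multus readiness failures continue in between.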
Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.073382236Z" level=info msg="runSandbox: deleting pod ID cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af from idIndex" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.073410361Z" level=info msg="runSandbox: removing pod sandbox cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.073426328Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.073439244Z" level=info msg="runSandbox: unmounting shmPath for sandbox cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.089465122Z" level=info msg="runSandbox: removing pod sandbox from storage: cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.093092587Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:25.093114545Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d0c935d3-0be1-435c-bea8-29b88d316591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:25.093317 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:25.093352 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:39:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:25.093376 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:39:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:25.093414 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(cb634b3e29087c93ea65baa518935a8da2ce6b3a0be07f864dbac420633706af): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.866463 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.866487 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.866494 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.866501 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.866506 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.866512 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.866521 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.996026 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:39:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:27.996409888Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:27.996459184Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.996753 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:27.996856 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:39:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:27.997024298Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:27.997068611Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:27.997139153Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:27.997170755Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:28.015011078Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/617f252f-1f16-45c9-80c7-2d1f878c5b2c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:28.015034202Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:28.015880843Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/3b6c8944-15a2-40c4-943f-8bc863dc7bd3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:28.015898215Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:28.017341128Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/75643165-ed17-4449-b892-f2e14638c4ca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:28.017360928Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:28.143361024Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.037007850Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.037038201Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e42c648c\x2d9eb6\x2d4283\x2d922c\x2deef97f9175eb.mount: Succeeded. Jan 23 16:39:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e42c648c\x2d9eb6\x2d4283\x2d922c\x2deef97f9175eb.mount: Succeeded. Jan 23 16:39:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e42c648c\x2d9eb6\x2d4283\x2d922c\x2deef97f9175eb.mount: Succeeded. Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.083307471Z" level=info msg="runSandbox: deleting pod ID 954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1 from idIndex" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.083331345Z" level=info msg="runSandbox: removing pod sandbox 954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.083346718Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.083360376Z" level=info msg="runSandbox: unmounting shmPath for sandbox 954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1-userdata-shm.mount: Succeeded.
Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.103446680Z" level=info msg="runSandbox: removing pod sandbox from storage: 954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.106316701Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:29.106334530Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=c2e19bbf-45f2-455f-939c-7b25843a316e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:29.106559 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:39:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:29.106608 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:39:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:29.106631 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:39:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:29.106681 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(954df7849e999122e2017cb90d58034b2e4f21e5c4484d58ef4cea4dd508bba1): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:39:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:30.995706 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:30.995838 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:30.996684258Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:30.996932758Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:30.998835616Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:30.998926323Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:31.015888400Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/ba51f7d9-8424-4c9c-a1c2-492b1ec4ff55 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:31.015909679Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:31.015890107Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/9eb1a95d-f780-49a0-95ad-dfa95132e2c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:31.016045795Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:32.995664 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:39:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:32.996060225Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:32.996113103Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:33.008014558Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/9743955f-b694-4d16-b5a9-31afadd1ee3b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:33.008036138Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:34.996244 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:39:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:34.996368 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:39:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:34.996574871Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:34.996616478Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:34.996697275Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:34.996741949Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:35.012038380Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/dd9510eb-909e-4459-ae93-759589edda37 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:35.012063722Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:39:35.012715039Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/cf0d28d9-7073-4bcd-a268-57b47b9e4246 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:35.012737947Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:35.996006 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:35.996369676Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:35.996403060Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:36.007016528Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/80805897-2f5b-4736-bbb3-a6f4fa7434cf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:36.007039881Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:36.995913 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:39:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:36.998776197Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:36.998809524Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:37.009145881Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/efc5ff22-e5e9-42b0-8b04-39749de20052 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:37.009168867Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:37.996763 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:39:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:37.997289 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1191] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1196] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1197] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1209] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1210] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1223] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1226] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1226] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1228] device (eno12409): state change: prepare -> config (reason 'none', 
sys-iface-state: 'managed') Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1231] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491978.1235] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:39:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674491979.7242] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:39:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:41.995841 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:39:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:41.996362943Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:41.996406742Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:39:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:42.009972491Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/55f21c18-2487-4e89-bff5-14393b171c41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:39:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:42.009997021Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:39:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:50.997123 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:39:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:50.997625 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.811968563Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:39:53.812160577Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.812410795Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.812453993Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.812432005Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.812520905Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.813113218Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.813166378Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox 7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.815294105Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.815336473Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6b27f2c4\x2d861c\x2d45db\x2d9c5d\x2dd08453353a4b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6b27f2c4\x2d861c\x2d45db\x2d9c5d\x2dd08453353a4b.mount has successfully entered the 'dead' state. Jan 23 16:39:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-05174d5d\x2db760\x2d4f2a\x2db97c\x2d4eea60d89938.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-05174d5d\x2db760\x2d4f2a\x2db97c\x2d4eea60d89938.mount has successfully entered the 'dead' state. Jan 23 16:39:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-282f70eb\x2d6be8\x2d4fb9\x2dab6e\x2ddd0572761f19.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-282f70eb\x2d6be8\x2d4fb9\x2dab6e\x2ddd0572761f19.mount has successfully entered the 'dead' state. Jan 23 16:39:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4e978088\x2d2000\x2d4eb0\x2db783\x2d9af6bc43ba71.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4e978088\x2d2000\x2d4eb0\x2db783\x2d9af6bc43ba71.mount has successfully entered the 'dead' state. Jan 23 16:39:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f0cb1d95\x2d0655\x2d45ec\x2db4e5\x2d6c2da18ddd04.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f0cb1d95\x2d0655\x2d45ec\x2db4e5\x2d6c2da18ddd04.mount has successfully entered the 'dead' state. Jan 23 16:39:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6b27f2c4\x2d861c\x2d45db\x2d9c5d\x2dd08453353a4b.mount: Succeeded. 
Jan 23 16:39:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4e978088\x2d2000\x2d4eb0\x2db783\x2d9af6bc43ba71.mount: Succeeded. Jan 23 16:39:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-282f70eb\x2d6be8\x2d4fb9\x2dab6e\x2ddd0572761f19.mount: Succeeded. Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.853319883Z" level=info msg="runSandbox: deleting pod ID 71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4 from idIndex" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.853349727Z" level=info msg="runSandbox: removing pod sandbox 71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.853366783Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.853384745Z" level=info msg="runSandbox: unmounting shmPath for sandbox 71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.853322808Z" level=info msg="runSandbox: deleting pod ID 7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2 from idIndex" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.853438578Z" level=info msg="runSandbox: removing pod sandbox 7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.853452581Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.853465849Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.854282958Z" level=info msg="runSandbox: deleting pod ID
60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274 from idIndex" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.854308607Z" level=info msg="runSandbox: removing pod sandbox 60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.854322574Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.854335289Z" level=info msg="runSandbox: unmounting shmPath for sandbox 60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.857290456Z" level=info msg="runSandbox: deleting pod ID f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6 from idIndex" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.857319466Z" level=info msg="runSandbox: removing pod sandbox f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.857331804Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.857344398Z" level=info msg="runSandbox: unmounting shmPath for sandbox f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.857293219Z" level=info msg="runSandbox: deleting pod ID 1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308 from idIndex" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.857399684Z" level=info msg="runSandbox: removing pod sandbox 1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.857413323Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.857425134Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308" id=742a8f24-3648-43b2-adda-2c39caacb435 
name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.869482808Z" level=info msg="runSandbox: removing pod sandbox from storage: 71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.869496655Z" level=info msg="runSandbox: removing pod sandbox from storage: 60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.870505629Z" level=info msg="runSandbox: removing pod sandbox from storage: 7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.872422687Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.872442330Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f211ac47-487a-40b6-9f86-649fcd8de816 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.872860 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.872906 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.872932 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.872983 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.873521133Z" level=info msg="runSandbox: removing pod sandbox from storage: 1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.873530795Z" level=info msg="runSandbox: removing pod sandbox from storage: f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.875577628Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.875595245Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=64d15972-3230-425b-8047-47e007ffafe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.875861 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.875902 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.875926 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.875969 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.878590850Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.878607117Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=742a8f24-3648-43b2-adda-2c39caacb435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.878845 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.878878 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.878899 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.878937 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.881641062Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.881659586Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=87288977-9e41-436f-85b0-542e52b21e83 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.881879 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.881915 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.881937 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.881977 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.884616284Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.884633987Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=e2ef56b8-5ac5-4601-bef7-4423f30954d0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.884834 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.884866 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.884887 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:39:53.884924 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:53.928455 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:53.928532 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:53.928652 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:53.928780 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.928812281Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.928844005Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:39:53.928855 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.928923167Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.928950209Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.929023552Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.929049282Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.929110223Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.929135203Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.929154242Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.929170340Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.955620476Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/d2778388-9f59-467a-a16c-48dc72e429d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.955644350Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.956716369Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/89b9e670-ebac-4776-817e-0555cfd0b6f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.956738995Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.961013072Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/2da85c41-f45e-4ba6-97ad-24624571ebc5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.961036101Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.962284631Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5fec3922-e5e0-46c9-997a-5f40f3f296a9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.962304734Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.965780280Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/9be6fc2d-ac06-44cd-a459-8031ea5d5a1b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:39:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:53.965802810Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f0cb1d95\x2d0655\x2d45ec\x2db4e5\x2d6c2da18ddd04.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-f0cb1d95\x2d0655\x2d45ec\x2db4e5\x2d6c2da18ddd04.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f0cb1d95\x2d0655\x2d45ec\x2db4e5\x2d6c2da18ddd04.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-f0cb1d95\x2d0655\x2d45ec\x2db4e5\x2d6c2da18ddd04.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6b27f2c4\x2d861c\x2d45db\x2d9c5d\x2dd08453353a4b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-6b27f2c4\x2d861c\x2d45db\x2d9c5d\x2dd08453353a4b.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-05174d5d\x2db760\x2d4f2a\x2db97c\x2d4eea60d89938.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-05174d5d\x2db760\x2d4f2a\x2db97c\x2d4eea60d89938.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-05174d5d\x2db760\x2d4f2a\x2db97c\x2d4eea60d89938.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-05174d5d\x2db760\x2d4f2a\x2db97c\x2d4eea60d89938.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-282f70eb\x2d6be8\x2d4fb9\x2dab6e\x2ddd0572761f19.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-282f70eb\x2d6be8\x2d4fb9\x2dab6e\x2ddd0572761f19.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4e978088\x2d2000\x2d4eb0\x2db783\x2d9af6bc43ba71.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4e978088\x2d2000\x2d4eb0\x2db783\x2d9af6bc43ba71.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-71e12ac2d767192644ca0b6d21f3db1a2fde9c92d6dd24466ff0843e1abb19d4-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f5a7bdb8a0486569e6c13ec148efbe2f61240052ca70a14f1abcdd483f7242e6-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-7b2a2db87082d81e1d26c68bbdfcc4f0ab5adb970f2e6ee22007e792214a79d2-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1bb9cdeac3f4971d8a7a3735bdb9aa85fe2b4dadca9a5ac32a9e2f31178ef308-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-60c8c13d4e42539843307f87ee51e2a62579659cfd7e49d3275f6b7f24b64274-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:39:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:39:58.143466644Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:40:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:03.022185461Z" level=info msg="NetworkStart: stopping network for sandbox 584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:03.022355819Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/03d12329-66a6-4f65-8c7e-862aa95bee27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:03.022379441Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:03.022387209Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:03.022394467Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:05.996259 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:40:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:05.996942 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.027994166Z" level=info msg="NetworkStart: stopping network for sandbox 881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028348902Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/617f252f-1f16-45c9-80c7-2d1f878c5b2c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028372743Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028379479Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028385757Z" level=info 
msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028490281Z" level=info msg="NetworkStart: stopping network for sandbox 7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028609951Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/3b6c8944-15a2-40c4-943f-8bc863dc7bd3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028630805Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028637291Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.028644443Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.030360340Z" level=info msg="NetworkStart: stopping network for sandbox f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.030499498Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/75643165-ed17-4449-b892-f2e14638c4ca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.030526161Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.030533583Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:13.030539934Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.028656905Z" level=info msg="NetworkStart: stopping network for sandbox dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.028867513Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver 
ID:dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/ba51f7d9-8424-4c9c-a1c2-492b1ec4ff55 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.028891023Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.028897619Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.028904423Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.029060521Z" level=info msg="NetworkStart: stopping network for sandbox 7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.029216081Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/9eb1a95d-f780-49a0-95ad-dfa95132e2c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.029243936Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.029251295Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:16.029257785Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:16.996426 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:40:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:16.996914 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:18.021880696Z" level=info msg="NetworkStart: stopping network for sandbox ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:18.022072134Z" level=info msg="Got pod network 
&{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/9743955f-b694-4d16-b5a9-31afadd1ee3b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:18.022097275Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:18.022103855Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:18.022110950Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.024855606Z" level=info msg="NetworkStart: stopping network for sandbox a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.024996213Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/dd9510eb-909e-4459-ae93-759589edda37 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.025016525Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.025023483Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.025029981Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.026138441Z" level=info msg="NetworkStart: stopping network for sandbox 7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.026254019Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/cf0d28d9-7073-4bcd-a268-57b47b9e4246 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.026276534Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.026283679Z" level=warning msg="falling back to loading from 
existing plugins on disk" Jan 23 16:40:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:20.026289436Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:21.019276357Z" level=info msg="NetworkStart: stopping network for sandbox da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:21.019418755Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/80805897-2f5b-4736-bbb3-a6f4fa7434cf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:21.019439195Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:21.019445955Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:21.019451958Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:22.021297525Z" level=info msg="NetworkStart: stopping network for sandbox 0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:22.021462230Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/efc5ff22-e5e9-42b0-8b04-39749de20052 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:22.021488784Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:22.021496659Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:22.021503038Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:27.023357478Z" level=info msg="NetworkStart: stopping network for sandbox 732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:27.023508089Z" level=info msg="Got pod network 
&{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/55f21c18-2487-4e89-bff5-14393b171c41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:27.023531288Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:27.023538774Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:27.023545177Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:27.866980 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:27.867120 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:27.867126 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:27.867135 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:27.867140 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:27.867147 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:27.867152 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:27.873486370Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=ced9083f-b225-4313-8375-b7fbf4572e6f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:27.873615545Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ced9083f-b225-4313-8375-b7fbf4572e6f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:40:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:40:28.142537673Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:40:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:29.996942 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:40:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:29.997491 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.970435764Z" level=info msg="NetworkStart: stopping network for sandbox e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.970813838Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/d2778388-9f59-467a-a16c-48dc72e429d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.970838108Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.970844326Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.970850714Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.972047253Z" level=info msg="NetworkStart: stopping network for sandbox 7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.972195453Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/89b9e670-ebac-4776-817e-0555cfd0b6f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.972243469Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.972253153Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.972260614Z" level=info msg="Deleting pod 
openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.973853096Z" level=info msg="NetworkStart: stopping network for sandbox 892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.973963359Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/2da85c41-f45e-4ba6-97ad-24624571ebc5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.973986428Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.973994904Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.974002402Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.975737291Z" level=info msg="NetworkStart: stopping network for sandbox fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.975846633Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5fec3922-e5e0-46c9-997a-5f40f3f296a9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.975867596Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.975875855Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.975882923Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.978122355Z" level=info msg="NetworkStart: stopping network for sandbox ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.978245467Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd 
NetNS:/var/run/netns/9be6fc2d-ac06-44cd-a459-8031ea5d5a1b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.978273279Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.978281690Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:40:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:38.978289562Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:40:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:41.997156 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:40:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:41.997717 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.035977052Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.036036712Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-03d12329\x2d66a6\x2d4f65\x2d8c7e\x2d862aa95bee27.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-03d12329\x2d66a6\x2d4f65\x2d8c7e\x2d862aa95bee27.mount has successfully entered the 'dead' state. Jan 23 16:40:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-03d12329\x2d66a6\x2d4f65\x2d8c7e\x2d862aa95bee27.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-03d12329\x2d66a6\x2d4f65\x2d8c7e\x2d862aa95bee27.mount has successfully entered the 'dead' state. 
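Everything above and below this point is one failure repeating: multus gates every CNI ADD and DEL for non-hostNetwork pods on a readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes writes only once the default network is up. Because ovnkube-node-897lw is pinned in CrashLoopBackOff, the file never appears, so every RunPodSandbox attempt polls until it fails with "PollImmediate error waiting for ReadinessIndicatorFile ... timed out waiting for the condition". A minimal stdlib sketch of that gate, with illustrative interval and timeout values (multus itself polls via wait.PollImmediate from k8s.io/apimachinery; this is not its code):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForReadinessIndicator blocks until path exists or the timeout elapses,
// mirroring the gate multus applies before it lets any CNI ADD or DEL run.
func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // default network is up; the sandbox operation may proceed
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path taken verbatim from the log; 1s/10s are illustrative, not the
	// plugin's defaults.
	err := waitForReadinessIndicator(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 10*time.Second)
	if err != nil {
		fmt.Println("CNI blocked:", err)
	}
}
```

Once ovnkube-node stays up long enough to write that file, the same poll returns nil and the sandbox creations seen failing throughout this journal go through.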
Jan 23 16:40:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-03d12329\x2d66a6\x2d4f65\x2d8c7e\x2d862aa95bee27.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-03d12329\x2d66a6\x2d4f65\x2d8c7e\x2d862aa95bee27.mount has successfully entered the 'dead' state. Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.090357799Z" level=info msg="runSandbox: deleting pod ID 584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7 from idIndex" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.090388399Z" level=info msg="runSandbox: removing pod sandbox 584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.090404376Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.090418996Z" level=info msg="runSandbox: unmounting shmPath for sandbox 584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7-userdata-shm.mount has successfully entered the 'dead' state. 
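When the ADD fails, CRI-O unwinds the half-built sandbox in the fixed order logged here: delete the pod ID from the in-memory idIndex, remove the sandbox, drop its container ID, unmount the per-sandbox shm bind mount (systemd reports this as the overlay-containers ...userdata-shm.mount unit entering the dead state), and finally remove the sandbox from storage. If that unwind is ever interrupted, the shm and pinned uts/ipc/net namespace mounts leak. A hypothetical stdlib triage helper (not CRI-O code) that scans mountinfo for such leftovers; the path substrings match the mount units named in this journal:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Scans /proc/self/mountinfo for sandbox-scoped mounts that runSandbox
// cleanup should have removed: per-sandbox shm bind mounts and pinned
// namespace files under /run. Purely a triage aid.
func main() {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	needles := []string{
		"containers/storage/overlay-containers", // .../<sandbox-id>/userdata/shm
		"/run/utsns/", "/run/ipcns/", "/run/netns/",
	}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		for _, n := range needles {
			if strings.Contains(line, n) {
				fmt.Println(line)
				break
			}
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

In this log the unwind is completing: each "unmounting shmPath" message is paired with a systemd .mount unit reaching the dead state, so nothing is leaking, the sandboxes simply never come up.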
Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.110480415Z" level=info msg="runSandbox: removing pod sandbox from storage: 584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.113881155Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:48.113902366Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=a09f9d3d-7cd5-4ca8-8937-16d7d45c584b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:48.114179 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:40:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:48.114240 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:40:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:48.114268 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:40:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:48.114334 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(584302d3f28c1b490611035fc6c68e5cad34607589f12ded5b5400cc43a372f7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:40:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:40:55.996213 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:40:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:55.996873 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.039932809Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.039973664Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.039979994Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.040015858Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.042404176Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb): error removing pod 
openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.042442374Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3b6c8944\x2d15a2\x2d40c4\x2d943f\x2d8bc863dc7bd3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3b6c8944\x2d15a2\x2d40c4\x2d943f\x2d8bc863dc7bd3.mount has successfully entered the 'dead' state. Jan 23 16:40:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-617f252f\x2d1f16\x2d45c9\x2d80c7\x2d2d1f878c5b2c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-617f252f\x2d1f16\x2d45c9\x2d80c7\x2d2d1f878c5b2c.mount has successfully entered the 'dead' state. Jan 23 16:40:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-75643165\x2ded17\x2d4449\x2db892\x2df2e14638c4ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-75643165\x2ded17\x2d4449\x2db892\x2df2e14638c4ca.mount has successfully entered the 'dead' state. Jan 23 16:40:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3b6c8944\x2d15a2\x2d40c4\x2d943f\x2d8bc863dc7bd3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3b6c8944\x2d15a2\x2d40c4\x2d943f\x2d8bc863dc7bd3.mount has successfully entered the 'dead' state. Jan 23 16:40:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-617f252f\x2d1f16\x2d45c9\x2d80c7\x2d2d1f878c5b2c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-617f252f\x2d1f16\x2d45c9\x2d80c7\x2d2d1f878c5b2c.mount has successfully entered the 'dead' state. Jan 23 16:40:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-75643165\x2ded17\x2d4449\x2db892\x2df2e14638c4ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-75643165\x2ded17\x2d4449\x2db892\x2df2e14638c4ca.mount has successfully entered the 'dead' state. Jan 23 16:40:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3b6c8944\x2d15a2\x2d40c4\x2d943f\x2d8bc863dc7bd3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3b6c8944\x2d15a2\x2d40c4\x2d943f\x2d8bc863dc7bd3.mount has successfully entered the 'dead' state. 
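Each sandbox failure is then reported four times by kubelet (remote_runtime.go, kuberuntime_sandbox.go, kuberuntime_manager.go, and finally pod_workers.go "Error syncing pod, skipping"), so the single root cause, the missing OVN readiness file, fans out into near-identical stanzas for dns-default-srzv5, network-check-target-qs9w4, revision-pruner-10, and the guard pods. A hypothetical helper that collapses the flood to the distinct pods failing CNI ADD; the regex is written against the exact message shape in this journal:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// Reads a journal dump on stdin and prints each pod that failed CNI ADD,
// deduplicated. The pattern targets the multus error in this log:
//   error adding pod <ns>_<name> to CNI network "multus-cni-network"
// (DEL-side failures, "error removing pod", are deliberately ignored.)
func main() {
	re := regexp.MustCompile(`error adding pod (\S+) to CNI network`)
	seen := map[string]bool{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			seen[m[1]] = true
		}
	}

	pods := make([]string, 0, len(seen))
	for p := range seen {
		pods = append(pods, p)
	}
	sort.Strings(pods)
	for _, p := range pods {
		fmt.Println(p)
	}
}
```

Feeding this journal through it on stdin (e.g. journalctl --no-pager piped into the program) reduces the repetition to a short list of namespace_name pairs, all blocked on the same readiness file.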
Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.094362054Z" level=info msg="runSandbox: deleting pod ID 881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440 from idIndex" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.094389327Z" level=info msg="runSandbox: removing pod sandbox 881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.094402757Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.094424236Z" level=info msg="runSandbox: unmounting shmPath for sandbox 881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.094363509Z" level=info msg="runSandbox: deleting pod ID 7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363 from idIndex" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.094474954Z" level=info msg="runSandbox: removing pod sandbox 7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.094488882Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.094501423Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.102282890Z" level=info msg="runSandbox: deleting pod ID f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb from idIndex" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.102313446Z" level=info msg="runSandbox: removing pod sandbox f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.102334312Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.102349747Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.114470918Z" level=info msg="runSandbox: removing pod sandbox from storage: 7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.115435234Z" level=info msg="runSandbox: removing pod sandbox from storage: 881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.117954991Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.117976574Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=77e09270-fc87-477c-aaea-71c4d0f60ad9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.118208 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.118254 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.118277 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.118325 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.118442338Z" level=info msg="runSandbox: removing pod sandbox from storage: f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.121072576Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.121088979Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=31c0422a-5941-4507-a0b8-9c9a498698fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.121304 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.121336 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.121365 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.121400 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.124099252Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.124116085Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=c95fa61f-1b8a-4043-89ec-f641ab00b4d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.124254 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.124284 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.124306 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:40:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:40:58.124344 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:40:58.147994021Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:40:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-75643165\x2ded17\x2d4449\x2db892\x2df2e14638c4ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-75643165\x2ded17\x2d4449\x2db892\x2df2e14638c4ca.mount has successfully entered the 'dead' state. Jan 23 16:40:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-617f252f\x2d1f16\x2d45c9\x2d80c7\x2d2d1f878c5b2c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-617f252f\x2d1f16\x2d45c9\x2d80c7\x2d2d1f878c5b2c.mount has successfully entered the 'dead' state. Jan 23 16:40:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f457b74097e889613639ce406ada529083fcc75aef434a5a0b0314f612db03eb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:40:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7c1c217e18d697f74daa8e314b27e1145cf60bd8a951e9b27715eb664c0a4363-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:40:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-881c8e891491be2f9e7d76e90f9b4b4db0835832628d55f406e7b34496a79440-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.039687855Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.039736086Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.040584565Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.040627580Z" level=info msg="runSandbox: 
cleaning up namespaces after failing to run sandbox 7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ba51f7d9\x2d8424\x2d4c9c\x2da1c2\x2d492b1ec4ff55.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ba51f7d9\x2d8424\x2d4c9c\x2da1c2\x2d492b1ec4ff55.mount has successfully entered the 'dead' state. Jan 23 16:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9eb1a95d\x2df780\x2d49a0\x2d95ad\x2ddfa95132e2c1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9eb1a95d\x2df780\x2d49a0\x2d95ad\x2ddfa95132e2c1.mount has successfully entered the 'dead' state. Jan 23 16:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ba51f7d9\x2d8424\x2d4c9c\x2da1c2\x2d492b1ec4ff55.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ba51f7d9\x2d8424\x2d4c9c\x2da1c2\x2d492b1ec4ff55.mount has successfully entered the 'dead' state. Jan 23 16:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9eb1a95d\x2df780\x2d49a0\x2d95ad\x2ddfa95132e2c1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9eb1a95d\x2df780\x2d49a0\x2d95ad\x2ddfa95132e2c1.mount has successfully entered the 'dead' state. Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.091320924Z" level=info msg="runSandbox: deleting pod ID dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5 from idIndex" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.091346111Z" level=info msg="runSandbox: removing pod sandbox dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.091359379Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.091370607Z" level=info msg="runSandbox: unmounting shmPath for sandbox dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.092305449Z" level=info msg="runSandbox: deleting pod ID 7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca from idIndex" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.092332904Z" level=info msg="runSandbox: removing pod sandbox 7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:41:01.092349648Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.092365142Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.107455219Z" level=info msg="runSandbox: removing pod sandbox from storage: dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.107477530Z" level=info msg="runSandbox: removing pod sandbox from storage: 7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.111071931Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.111090467Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=742e2482-e076-4bab-814d-31071d6b0b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:01.111307 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:01.111350 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:01.111376 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:01.111427 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.114090174Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.114108281Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=b8c35d67-6cab-48a3-bc98-bdc3bb59a9bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:01.114316 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:01.114352 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:01.114373 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:01.114413 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ba51f7d9\x2d8424\x2d4c9c\x2da1c2\x2d492b1ec4ff55.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ba51f7d9\x2d8424\x2d4c9c\x2da1c2\x2d492b1ec4ff55.mount has successfully entered the 'dead' state. Jan 23 16:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9eb1a95d\x2df780\x2d49a0\x2d95ad\x2ddfa95132e2c1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9eb1a95d\x2df780\x2d49a0\x2d95ad\x2ddfa95132e2c1.mount has successfully entered the 'dead' state. 
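Every sandbox failure in this stretch reduces to the same condition: before serving a CNI ADD (and, per the "(on del)" variants, a DEL), Multus polls for the default network's readiness indicator file, and the poll times out because /var/run/multus/cni/net.d/10-ovn-kubernetes.conf never appears. A minimal sketch of such a wait in Go, assuming k8s.io/apimachinery's wait.PollImmediate with an illustrative 1s interval and 60s timeout (not necessarily the values this Multus build uses); wait.ErrWaitTimeout is where the literal "timed out waiting for the condition" text originates:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

const readinessIndicatorFile = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

// waitForReadinessIndicator blocks until the indicator file exists or the
// timeout elapses; on timeout, wait.PollImmediate returns wait.ErrWaitTimeout,
// whose message is exactly "timed out waiting for the condition".
func waitForReadinessIndicator(timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(readinessIndicatorFile); err != nil {
			return false, nil // not there yet; keep polling
		}
		return true, nil
	})
}

func main() {
	if err := waitForReadinessIndicator(60 * time.Second); err != nil {
		fmt.Printf("still waiting for readinessindicatorfile @ %s. pollimmediate error: %v\n",
			readinessIndicatorFile, err)
	}
}

Until that file exists, every RunPodSandbox attempt in this log can be expected to fail the same way, regardless of which pod is being scheduled.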
Jan 23 16:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dad35ffb58279f4922e99888547d17fa88fe63c3b303e88aa861a8b2991bc3d5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7ae229292572af64a4d1788f04bfbf4b01c19f69e3030e6293b7264c245eb3ca-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:01.996410 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.996800526Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:01.996844955Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:02.009405328Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/d2b3d624-7a11-4cf6-b050-f34d80a9d527 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:02.009428255Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.032652936Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.032896610Z" level=info msg="runSandbox: cleaning up namespaces after 
failing to run sandbox ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9743955f\x2db694\x2d4d16\x2db5a9\x2d31afadd1ee3b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9743955f\x2db694\x2d4d16\x2db5a9\x2d31afadd1ee3b.mount has successfully entered the 'dead' state. Jan 23 16:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9743955f\x2db694\x2d4d16\x2db5a9\x2d31afadd1ee3b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9743955f\x2db694\x2d4d16\x2db5a9\x2d31afadd1ee3b.mount has successfully entered the 'dead' state. Jan 23 16:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9743955f\x2db694\x2d4d16\x2db5a9\x2d31afadd1ee3b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9743955f\x2db694\x2d4d16\x2db5a9\x2d31afadd1ee3b.mount has successfully entered the 'dead' state. Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.081315947Z" level=info msg="runSandbox: deleting pod ID ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def from idIndex" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.081342539Z" level=info msg="runSandbox: removing pod sandbox ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.081355969Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.081367931Z" level=info msg="runSandbox: unmounting shmPath for sandbox ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def-userdata-shm.mount has successfully entered the 'dead' state. 
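The backslash sequences in the mount unit names here (run-netns-9743955f\x2db694\x2d…) are systemd's unit-name escaping, not log corruption: path-derived unit names use "-" to stand for "/", so literal "-" bytes (0x2d) and other characters outside [A-Za-z0-9:_.] are hex-escaped. A rough Go equivalent of systemd-escape --path --suffix=mount (simplified; real systemd also special-cases leading dots and empty paths):

package main

import (
	"fmt"
	"strings"
)

// systemdEscapePath approximates how systemd derives a mount unit name
// from a filesystem path: trim slashes, map "/" to "-", and hex-escape
// every byte outside [A-Za-z0-9:_.] as \xXX.
func systemdEscapePath(path string) string {
	trimmed := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(trimmed); i++ {
		c := trimmed[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' (0x2d) becomes \x2d
		}
	}
	return b.String() + ".mount"
}

func main() {
	// Prints: run-netns-9743955f\x2db694\x2d4d16\x2db5a9\x2d31afadd1ee3b.mount
	fmt.Println(systemdEscapePath("/run/netns/9743955f-b694-4d16-b5a9-31afadd1ee3b"))
}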
Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.101444041Z" level=info msg="runSandbox: removing pod sandbox from storage: ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.104518065Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:03.104536926Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=bbb56d2a-b2cb-4888-a130-9d5f0d06e587 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:03.104768 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:03.104812 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:03.104836 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:03.104888 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ed9e1bc843de639c0c2540c379aab91a3f3399bd1b3596e12ae7f5dcc7d25def): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.035741424Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.035780051Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.037045251Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.037075620Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dd9510eb\x2d909e\x2d4459\x2dae93\x2d759589edda37.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-dd9510eb\x2d909e\x2d4459\x2dae93\x2d759589edda37.mount has successfully entered the 'dead' state. Jan 23 16:41:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cf0d28d9\x2d7073\x2d4bcd\x2da268\x2d57b47b9e4246.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cf0d28d9\x2d7073\x2d4bcd\x2da268\x2d57b47b9e4246.mount has successfully entered the 'dead' state. Jan 23 16:41:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dd9510eb\x2d909e\x2d4459\x2dae93\x2d759589edda37.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-dd9510eb\x2d909e\x2d4459\x2dae93\x2d759589edda37.mount has successfully entered the 'dead' state. 
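The deepening escape runs in the pod_workers "Error syncing pod, skipping" entries (\" nested inside \\\") are a quoting artifact, not garbage: each logging layer that re-reports the CRI error quotes the already-quoted string again. A self-contained illustration of the mechanism (this is repeated %q quoting, not kubelet's actual call chain):

package main

import "fmt"

func main() {
	// The CRI error arrives as a plain string containing double quotes.
	cri := `plugin type="multus" name="multus-cni-network" failed (add)`

	// One layer of %q turns each " into \".
	once := fmt.Sprintf("%q", cri)

	// A second layer quotes the quoted string: \" becomes \\\",
	// matching the escaping in the "Error syncing pod" lines.
	twice := fmt.Sprintf("%q", once)

	fmt.Println(once)
	fmt.Println(twice)
}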
Jan 23 16:41:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cf0d28d9\x2d7073\x2d4bcd\x2da268\x2d57b47b9e4246.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cf0d28d9\x2d7073\x2d4bcd\x2da268\x2d57b47b9e4246.mount has successfully entered the 'dead' state. Jan 23 16:41:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dd9510eb\x2d909e\x2d4459\x2dae93\x2d759589edda37.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-dd9510eb\x2d909e\x2d4459\x2dae93\x2d759589edda37.mount has successfully entered the 'dead' state. Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.088312102Z" level=info msg="runSandbox: deleting pod ID a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7 from idIndex" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.088335593Z" level=info msg="runSandbox: removing pod sandbox a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.088348434Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.088360143Z" level=info msg="runSandbox: unmounting shmPath for sandbox a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.092305232Z" level=info msg="runSandbox: deleting pod ID 7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c from idIndex" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.092328709Z" level=info msg="runSandbox: removing pod sandbox 7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.092341723Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.092356787Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.100452319Z" level=info msg="runSandbox: removing pod sandbox from storage: a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:41:05.103927330Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.103945096Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=5da2b080-8676-48e7-a791-2bca39b53b45 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:05.104159 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:41:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:05.104211 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:41:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:05.104233 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:41:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:05.104283 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.104450715Z" level=info msg="runSandbox: removing pod sandbox from storage: 7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.107706620Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:05.107725252Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4beb6686-d50b-4762-aa31-8f118b463d79 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:05.107917 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:41:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:05.107958 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:41:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:05.107980 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:41:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:05.108024 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.030953202Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.030982127Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-80805897\x2d2f5b\x2d4736\x2dbbb3\x2da6f4fa7434cf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-80805897\x2d2f5b\x2d4736\x2dbbb3\x2da6f4fa7434cf.mount has successfully entered the 'dead' state. Jan 23 16:41:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cf0d28d9\x2d7073\x2d4bcd\x2da268\x2d57b47b9e4246.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cf0d28d9\x2d7073\x2d4bcd\x2da268\x2d57b47b9e4246.mount has successfully entered the 'dead' state. Jan 23 16:41:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7da838c6cb1d59ed1bc41f9e355cb0fd789fa411cee4e70c19c62569eb50894c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a5ace8a7129f8913353acbf74d69e7b846b0565e9e03fae8a9443bb824f98ac7-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-80805897\x2d2f5b\x2d4736\x2dbbb3\x2da6f4fa7434cf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-80805897\x2d2f5b\x2d4736\x2dbbb3\x2da6f4fa7434cf.mount has successfully entered the 'dead' state. Jan 23 16:41:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-80805897\x2d2f5b\x2d4736\x2dbbb3\x2da6f4fa7434cf.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-80805897\x2d2f5b\x2d4736\x2dbbb3\x2da6f4fa7434cf.mount has successfully entered the 'dead' state. Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.075304239Z" level=info msg="runSandbox: deleting pod ID da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1 from idIndex" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.075333218Z" level=info msg="runSandbox: removing pod sandbox da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.075346615Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.075357582Z" level=info msg="runSandbox: unmounting shmPath for sandbox da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1-userdata-shm.mount has successfully entered the 'dead' state. 
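Each failed RunPodSandbox is followed by the same cleanup sequence, seen here again for sandbox da156bda…: delete the pod ID from the idIndex, remove the sandbox, delete its container ID, unmount the shm path (which is what produces the paired …-userdata-shm.mount "dead" notices from systemd), remove the sandbox from storage, then release the container and sandbox names so the next attempt can reuse them. Restated as a runnable sketch; the helper names are hypothetical, only the ordering is taken from the messages above:

package main

import "fmt"

// step stands in for one logged cleanup action; CRI-O's real code differs.
func step(action, id string) { fmt.Printf("runSandbox: %s %s\n", action, id) }

func cleanupFailedSandbox(id string) {
	step("deleting pod ID from idIndex:", id)
	step("removing pod sandbox:", id)
	step("deleting container ID from idIndex for sandbox:", id)
	step("unmounting shmPath for sandbox:", id) // systemd then reports the shm mount unit dead
	step("removing pod sandbox from storage:", id)
	step("releasing container name and pod sandbox name for:", id)
}

func main() {
	cleanupFailedSandbox("da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1")
}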
Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.091443853Z" level=info msg="runSandbox: removing pod sandbox from storage: da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.094795484Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:06.094814651Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=692583f6-19d2-4b8d-8694-b4dc91a6733d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:06.095013 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:41:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:06.095056 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:41:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:06.095079 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:41:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:06.095122 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(da156bdaf474eb17b80a6a433a29018541b0be01571a5d1bc6e61f70f3d69da1): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.033227843Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.033270729Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-efc5ff22\x2de5e9\x2d42b0\x2d8b04\x2d39749de20052.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-efc5ff22\x2de5e9\x2d42b0\x2d8b04\x2d39749de20052.mount has successfully entered the 'dead' state. Jan 23 16:41:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-efc5ff22\x2de5e9\x2d42b0\x2d8b04\x2d39749de20052.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-efc5ff22\x2de5e9\x2d42b0\x2d8b04\x2d39749de20052.mount has successfully entered the 'dead' state. Jan 23 16:41:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-efc5ff22\x2de5e9\x2d42b0\x2d8b04\x2d39749de20052.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-efc5ff22\x2de5e9\x2d42b0\x2d8b04\x2d39749de20052.mount has successfully entered the 'dead' state. Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.083309844Z" level=info msg="runSandbox: deleting pod ID 0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb from idIndex" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.083338250Z" level=info msg="runSandbox: removing pod sandbox 0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.083356898Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.083370854Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb-userdata-shm.mount has successfully entered the 'dead' state. 
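The "Got pod network &{Name:… Namespace:… ID:… NetNS:… Networks:[] …}" records above and below are Go's %+v rendering of a pointer to a struct, which is why empty fields print as bare colons and empty slices as []. A sketch with an illustrative struct (the field names follow the log lines; the exact types are a guess, not CRI-O's definition):

package main

import "fmt"

// podNetwork mirrors the fields visible in the "Got pod network" lines.
type podNetwork struct {
	Name      string
	Namespace string
	ID        string
	UID       string
	NetNS     string
	Networks  []string
}

func main() {
	p := &podNetwork{
		Name:      "dns-default-srzv5",
		Namespace: "openshift-dns",
		NetNS:     "/var/run/netns/22e61ff9-7ad7-4c35-87a8-21d4338e6406",
	}
	// %+v on a struct pointer prints &{Field:value ...}, the exact shape
	// of the "Got pod network &{...}" entries.
	fmt.Printf("Got pod network %+v\n", p)
}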
Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.103438404Z" level=info msg="runSandbox: removing pod sandbox from storage: 0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.106793661Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:07.106813352Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=feed13ee-5c17-4993-b2bc-e015dc092da7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:07.107013 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:41:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:07.107058 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:41:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:07.107093 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:41:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:07.107148 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0b53b32dd977eb2b4e305390817358e74d7d3115c8c4ffa2a46818718285f3eb): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:41:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492068.1191] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:41:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492068.1196] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:41:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492068.1197] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:41:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492068.1492] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:41:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492068.1493] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:41:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:08.995697 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:41:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:08.996041290Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:08.996079925Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:09.007871568Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/22e61ff9-7ad7-4c35-87a8-21d4338e6406 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:09.007897834Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:09.996203 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:09.996543437Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:09.996577955Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:41:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:09.997009 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:41:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:09.997554 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:41:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:10.011078381Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/1e296172-aed6-4604-b37f-d7b5a6d43307 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:10.011103998Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:10 hub-master-0.workload.bos2.lab systemd[1]: run-runc-eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172-runc.6qw7wE.mount: Succeeded. 
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-runc-eef0a0bff2bcb3236192cf4fbb5614e0950f3f71d60b4b296d8622ed37b81172-runc.6qw7wE.mount has successfully entered the 'dead' state.
Jan 23 16:41:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:10.996270 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:41:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:10.996698445Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:10.996736941Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:11.007339453Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/2c126742-c159-48e2-ab60-d361b29c1476 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:41:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:11.007360197Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:41:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:11.996087 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:41:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:11.996449153Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:11.996487918Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.007755719Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/07e62555-26c7-474a-a809-a0be0e097b2b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.007777499Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.034048268Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.034078472Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-55f21c18\x2d2487\x2d4e89\x2dbff5\x2d14393b171c41.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-55f21c18\x2d2487\x2d4e89\x2dbff5\x2d14393b171c41.mount has successfully entered the 'dead' state.
Jan 23 16:41:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-55f21c18\x2d2487\x2d4e89\x2dbff5\x2d14393b171c41.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-55f21c18\x2d2487\x2d4e89\x2dbff5\x2d14393b171c41.mount has successfully entered the 'dead' state.
Jan 23 16:41:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-55f21c18\x2d2487\x2d4e89\x2dbff5\x2d14393b171c41.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-55f21c18\x2d2487\x2d4e89\x2dbff5\x2d14393b171c41.mount has successfully entered the 'dead' state.
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.082307649Z" level=info msg="runSandbox: deleting pod ID 732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93 from idIndex" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.082335125Z" level=info msg="runSandbox: removing pod sandbox 732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.082348362Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.082361619Z" level=info msg="runSandbox: unmounting shmPath for sandbox 732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.098433197Z" level=info msg="runSandbox: removing pod sandbox from storage: 732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.101248626Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:12.101266873Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=5976f115-4109-4c72-bcd9-9239898ac3a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:12.101474 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:41:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:12.101515 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:41:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:12.101538 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:41:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:12.101587 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(732a2a29a486d1e618979747f956c488b5921adcc1693196c43496325a3f7c93): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:41:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:13.996234 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:13.996610239Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:13.996650459Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:14.008041742Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/3c851f41-7cf9-434c-ad15-c9d3d3f2a655 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:41:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:14.008061725Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:41:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:15.996394 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:15.996720872Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:15.996760605Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:16.007797240Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/99d7c892-f680-45d9-9ceb-cc60b22be9c7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:16.007818638Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:41:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:17.995890 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:17.996387046Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:17.996439017Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:18.007533433Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/d12d0df6-e9ca-4feb-bb0f-7770fc95a5b5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:18.007556489Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:41:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:18.995930 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:18.996283617Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:18.996321356Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:19.006721121Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2e562cb8-d5ed-4ba1-ae3a-76026f5c565e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:41:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:19.006744272Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:41:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:19.995640 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:41:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:19.995932413Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:19.995968544Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:20.007567221Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/1465c067-88a4-4303-b939-01a3b9744695 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:20.007585215Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:41:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:21.995831 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:41:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:21.996184426Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:21.996226748Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:22.010862305Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/f4076dc9-4328-4509-a9fc-54b1294221cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:41:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:22.010886856Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.983334397Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.983378734Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.983796356Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.983841288Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.983856834Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.983888106Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.986595973Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.986627132Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2da85c41\x2df45e\x2d4ba6\x2d97ad\x2d24624571ebc5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2da85c41\x2df45e\x2d4ba6\x2d97ad\x2d24624571ebc5.mount has successfully entered the 'dead' state.
Jan 23 16:41:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-89b9e670\x2debac\x2d4776\x2d817e\x2d0555cfd0b6f5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-89b9e670\x2debac\x2d4776\x2d817e\x2d0555cfd0b6f5.mount has successfully entered the 'dead' state.
Jan 23 16:41:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d2778388\x2d9f59\x2d467a\x2da16c\x2d48dc72e429d9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d2778388\x2d9f59\x2d467a\x2da16c\x2d48dc72e429d9.mount has successfully entered the 'dead' state.
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.989533432Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:23.989564027Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5fec3922\x2de5e0\x2d46c9\x2d997a\x2d5f40f3f296a9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5fec3922\x2de5e0\x2d46c9\x2d997a\x2d5f40f3f296a9.mount has successfully entered the 'dead' state.
Jan 23 16:41:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9be6fc2d\x2dac06\x2d44cd\x2da459\x2d8031ea5d5a1b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9be6fc2d\x2dac06\x2d44cd\x2da459\x2d8031ea5d5a1b.mount has successfully entered the 'dead' state.
Jan 23 16:41:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:23.997069 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc"
Jan 23 16:41:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:23.997617 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:41:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9be6fc2d\x2dac06\x2d44cd\x2da459\x2d8031ea5d5a1b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9be6fc2d\x2dac06\x2d44cd\x2da459\x2d8031ea5d5a1b.mount has successfully entered the 'dead' state.
Jan 23 16:41:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2da85c41\x2df45e\x2d4ba6\x2d97ad\x2d24624571ebc5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2da85c41\x2df45e\x2d4ba6\x2d97ad\x2d24624571ebc5.mount has successfully entered the 'dead' state.
Jan 23 16:41:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-89b9e670\x2debac\x2d4776\x2d817e\x2d0555cfd0b6f5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-89b9e670\x2debac\x2d4776\x2d817e\x2d0555cfd0b6f5.mount has successfully entered the 'dead' state.
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035394085Z" level=info msg="runSandbox: deleting pod ID fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd from idIndex" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035415643Z" level=info msg="runSandbox: deleting pod ID ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638 from idIndex" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035449939Z" level=info msg="runSandbox: removing pod sandbox ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035465970Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035488174Z" level=info msg="runSandbox: unmounting shmPath for sandbox ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035425131Z" level=info msg="runSandbox: removing pod sandbox fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035543958Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035559798Z" level=info msg="runSandbox: unmounting shmPath for sandbox fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035397102Z" level=info msg="runSandbox: deleting pod ID 892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08 from idIndex" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035636511Z" level=info msg="runSandbox: removing pod sandbox 892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035397232Z" level=info msg="runSandbox: deleting pod ID 7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437 from idIndex" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035671690Z" level=info msg="runSandbox: removing pod sandbox 7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035684281Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035695850Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035653116Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.035772355Z" level=info msg="runSandbox: unmounting shmPath for sandbox 892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.036281500Z" level=info msg="runSandbox: deleting pod ID e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1 from idIndex" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.036307117Z" level=info msg="runSandbox: removing pod sandbox e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.036321436Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.036349520Z" level=info msg="runSandbox: unmounting shmPath for sandbox e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.047450494Z" level=info msg="runSandbox: removing pod sandbox from storage: 892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.048523476Z" level=info msg="runSandbox: removing pod sandbox from storage: ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.050451082Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.050470679Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=1ad6cc54-97fe-4f3f-9f97-384c18253685 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.050704 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.050742 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.050763 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.050804 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.051455489Z" level=info msg="runSandbox: removing pod sandbox from storage: e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.051574539Z" level=info msg="runSandbox: removing pod sandbox from storage: fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.051642266Z" level=info msg="runSandbox: removing pod sandbox from storage: 7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.053823734Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.053844689Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=05cc36a4-736d-42eb-8f43-2fbfd4efc7f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.054136 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.054168 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.054189 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.054233 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.056928508Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.056949550Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=00d507ee-cbd4-4a41-ba2a-169b59287081 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.057229 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.057270 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.057290 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.057330 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.060035060Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.060052919Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=6f96911a-f7f9-4938-9aab-993fe890d03d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.060296 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.060326 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.060346 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.060382 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.063811998Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.063833033Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=7f4d9274-3ce0-48c2-8bea-5e2950a122c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.064030 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.064065 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.064086 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:24.064125 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:24.087296 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:24.087444 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:24.087559 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.087607021Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.087643966Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:24.087658 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.087675301Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.087706523Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:41:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:24.087739 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.087962354Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.087992740Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.088006198Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.088025372Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.088048565Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.088078370Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.114112684Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/d844d8cd-3957-4bbf-a50f-1d80537890ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.114144035Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.115578589Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/53a8d099-bb08-4310-85c5-6a6d04de44fd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.115604028Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.120028038Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/65e1f5a3-5561-4ab0-95ba-e19b1a3d2b18 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:41:24.120052627Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.124424085Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/a864aef9-f88c-4118-81f6-3193e1ad9f87 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.124442786Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.125135868Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/41fbb591-900a-4c46-9561-af7ae93f8a56 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:24.125157209Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9be6fc2d\x2dac06\x2d44cd\x2da459\x2d8031ea5d5a1b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9be6fc2d\x2dac06\x2d44cd\x2da459\x2d8031ea5d5a1b.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5fec3922\x2de5e0\x2d46c9\x2d997a\x2d5f40f3f296a9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5fec3922\x2de5e0\x2d46c9\x2d997a\x2d5f40f3f296a9.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5fec3922\x2de5e0\x2d46c9\x2d997a\x2d5f40f3f296a9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5fec3922\x2de5e0\x2d46c9\x2d997a\x2d5f40f3f296a9.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2da85c41\x2df45e\x2d4ba6\x2d97ad\x2d24624571ebc5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2da85c41\x2df45e\x2d4ba6\x2d97ad\x2d24624571ebc5.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ae85c72c357b8402c087fd5624be49d5f7ba7cb147d2984861d58a64ec968638-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fb99aa9a70eb28f4d80b0096c2d68f14dcc3fe2bf2bf229bf818c58433b760dd-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-89b9e670\x2debac\x2d4776\x2d817e\x2d0555cfd0b6f5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-89b9e670\x2debac\x2d4776\x2d817e\x2d0555cfd0b6f5.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d2778388\x2d9f59\x2d467a\x2da16c\x2d48dc72e429d9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d2778388\x2d9f59\x2d467a\x2da16c\x2d48dc72e429d9.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d2778388\x2d9f59\x2d467a\x2da16c\x2d48dc72e429d9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d2778388\x2d9f59\x2d467a\x2da16c\x2d48dc72e429d9.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7e527db83a547cf3732ace64f0121c80b3c1408c4ad93867df68d9ea796b7437-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-892accc04983fca24d7bfb3205d866262daa6ab4dc7b5630bf708d3db438ec08-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e0f219b0b28b65a676fbce4b50f44f45da65c1332148021d98f5a1031b4727d1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:41:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:26.995745 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:41:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:26.996488594Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:26.996544759Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:41:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:27.007461891Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bb2f76aa-b2fa-4e39-ab0a-fc470ba55cb3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:27.007488633Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:27.867889 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:27.867909 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:27.867916 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:27.867923 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:27.867929 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:27.867934 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:27.867941 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:41:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:28.143687258Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:41:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:37.997875 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:41:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:37.998739 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:41:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:47.023345130Z" level=info msg="NetworkStart: stopping network for sandbox ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:47.023488693Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/d2b3d624-7a11-4cf6-b050-f34d80a9d527 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:47.023511149Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:41:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:47.023517547Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:41:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:47.023523842Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:41:48.997002 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:41:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:41:48.997509 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:54.022528011Z" level=info msg="NetworkStart: stopping network for sandbox 446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:54.022683435Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/22e61ff9-7ad7-4c35-87a8-21d4338e6406 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:54.022709560Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:54.022719007Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:41:54 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:41:54.022725189Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:55.024337346Z" level=info msg="NetworkStart: stopping network for sandbox 7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:55.024505570Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/1e296172-aed6-4604-b37f-d7b5a6d43307 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:55.024534224Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:41:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:55.024541575Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:41:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:55.024548875Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:56.020508562Z" level=info msg="NetworkStart: stopping network for sandbox 13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:56.020659833Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/2c126742-c159-48e2-ab60-d361b29c1476 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:56.020683379Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:41:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:56.020690637Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:41:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:56.020697029Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:57.020358960Z" level=info msg="NetworkStart: stopping network for sandbox 3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:57.020502011Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab 
Namespace:openshift-kube-controller-manager ID:3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/07e62555-26c7-474a-a809-a0be0e097b2b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:57.020525137Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:41:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:57.020532026Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:41:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:57.020538167Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:41:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:58.143178422Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:41:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:59.022470530Z" level=info msg="NetworkStart: stopping network for sandbox ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:41:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:59.022609965Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/3c851f41-7cf9-434c-ad15-c9d3d3f2a655 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:41:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:59.022635150Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:41:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:59.022642971Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:41:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:41:59.022649774Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:01.022382590Z" level=info msg="NetworkStart: stopping network for sandbox bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:01.022544813Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/99d7c892-f680-45d9-9ceb-cc60b22be9c7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:01.022568193Z" level=error msg="error loading cached network config: 
network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:01.022575791Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:01.022582592Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:02.996336 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:42:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:02.997031 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:42:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:03.021649255Z" level=info msg="NetworkStart: stopping network for sandbox 5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:03.021811882Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/d12d0df6-e9ca-4feb-bb0f-7770fc95a5b5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:03.021838211Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:03.021845388Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:03.021851293Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:04.019831993Z" level=info msg="NetworkStart: stopping network for sandbox e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:04.019998345Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2e562cb8-d5ed-4ba1-ae3a-76026f5c565e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:04.020023013Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:04 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:04.020030998Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:04.020037935Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:05.020582760Z" level=info msg="NetworkStart: stopping network for sandbox 3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:05.020734830Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/1465c067-88a4-4303-b939-01a3b9744695 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:05.020757098Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:05.020764155Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:05.020771971Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:07.023739306Z" level=info msg="NetworkStart: stopping network for sandbox 148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:07.023889851Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/f4076dc9-4328-4509-a9fc-54b1294221cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:07.023915771Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:07.023922496Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:07.023929384Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.128038987Z" level=info msg="NetworkStart: stopping network for sandbox a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 
16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.128182426Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/d844d8cd-3957-4bbf-a50f-1d80537890ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.128203794Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.128218092Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.128224240Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.131035653Z" level=info msg="NetworkStart: stopping network for sandbox ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.131181545Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/53a8d099-bb08-4310-85c5-6a6d04de44fd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.131216200Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.131223580Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.131230848Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.132295287Z" level=info msg="NetworkStart: stopping network for sandbox d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.132411433Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/65e1f5a3-5561-4ab0-95ba-e19b1a3d2b18 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.132432529Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:09 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.132439204Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.132444981Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138030956Z" level=info msg="NetworkStart: stopping network for sandbox d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138157372Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/a864aef9-f88c-4118-81f6-3193e1ad9f87 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138182319Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138190842Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138197635Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138604868Z" level=info msg="NetworkStart: stopping network for sandbox 43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138750265Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/41fbb591-900a-4c46-9561-af7ae93f8a56 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138772981Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138779933Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:09.138786463Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:12.021187642Z" level=info msg="NetworkStart: stopping network for sandbox 8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:12 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:12.021559879Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bb2f76aa-b2fa-4e39-ab0a-fc470ba55cb3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:12.021582776Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:42:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:12.021589220Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:42:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:12.021595636Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:13.997042 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:13.997780855Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=b2f91229-647c-4869-92b8-39bb30ba0ae8 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:13.997936179Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b2f91229-647c-4869-92b8-39bb30ba0ae8 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:13.998566028Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=906dc29b-f67e-423d-a077-d086fede5929 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:13.998665975Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=906dc29b-f67e-423d-a077-d086fede5929 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:13.999453265Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=0f851c7d-5775-4d88-8b16-090f0f366b17 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:13.999526775Z" level=warning msg="Allowed annotations are specified for workload 
[io.containers.trace-syscall]" Jan 23 16:42:14 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope. -- Subject: Unit crio-conmon-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:42:14 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335. -- Subject: Unit crio-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.117948423Z" level=info msg="Created container 32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=0f851c7d-5775-4d88-8b16-090f0f366b17 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.118442044Z" level=info msg="Starting container: 32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" id=230e28a8-9d81-4915-a990-77ff4a5d6b01 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.125569390Z" level=info msg="Started container" PID=62989 containerID=32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=230e28a8-9d81-4915-a990-77ff4a5d6b01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.130524352Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.140941879Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.140962305Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.140973448Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.149673819Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.149693363Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.149704043Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.158624598Z" level=info msg="Found CNI network multus-cni-network 
(type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.158642202Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.158652556Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.166728366Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.166744380Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.166753610Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.175243177Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:14.175262088Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:42:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:14.178274 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/183.log" Jan 23 16:42:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:14.179108 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335} Jan 23 16:42:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:14.179307 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 16:42:14 hub-master-0.workload.bos2.lab conmon[62977]: conmon 32e4f1a74aa7c06d7dc2 : container 62989 exited with status 1 Jan 23 16:42:14 hub-master-0.workload.bos2.lab systemd[1]: crio-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope has successfully entered the 'dead' state. Jan 23 16:42:14 hub-master-0.workload.bos2.lab systemd[1]: crio-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope: Consumed 563ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope completed and consumed the indicated resources. Jan 23 16:42:14 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope has successfully entered the 'dead' state. 
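The &{Name:... Namespace:... NetNS:... RuntimeConfig:map[...]} blobs recurring in the "Got pod network" entries are Go structs printed with fmt's %+v verb, which renders a pointer to a struct as &{Field:value ...}. A small sketch of how that shape arises; PodNetwork here is a hypothetical cut-down stand-in, not CRI-O's real type:

    // podnetwork_dump_sketch.go: reproduces the "&{Name:... Namespace:...}" shape
    // of the "Got pod network" entries above. Field values are copied verbatim
    // from the dns-default-srzv5 entry earlier in this log.
    package main

    import "fmt"

    // PodNetwork is a hypothetical stand-in with a few of the fields seen above.
    type PodNetwork struct {
    	Name      string
    	Namespace string
    	ID        string
    	NetNS     string
    }

    func main() {
    	pn := &PodNetwork{
    		Name:      "dns-default-srzv5",
    		Namespace: "openshift-dns",
    		ID:        "446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180",
    		NetNS:     "/var/run/netns/22e61ff9-7ad7-4c35-87a8-21d4338e6406",
    	}
    	// %+v prints field names, and a pointer renders with a leading "&",
    	// which is exactly the formatting seen in the crio entries.
    	fmt.Printf("Got pod network %+v\n", pn)
    }
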
Jan 23 16:42:14 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope: Consumed 49ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335.scope completed and consumed the indicated resources. Jan 23 16:42:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:15.182356 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/184.log" Jan 23 16:42:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:15.182848 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/183.log" Jan 23 16:42:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:15.183932 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" exitCode=1 Jan 23 16:42:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:15.183952 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335} Jan 23 16:42:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:15.183971 8631 scope.go:115] "RemoveContainer" containerID="4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" Jan 23 16:42:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:15.184453723Z" level=info msg="Removing container: 4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc" id=b6f22c1f-d93b-43d7-aba1-db4cac364605 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:42:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:15.184765 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:42:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:15.185257 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:42:15 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-8d576d89aaf1995787553a8db616786e1f132264e0781b17cd9eb91b8a790c3f-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-8d576d89aaf1995787553a8db616786e1f132264e0781b17cd9eb91b8a790c3f-merged.mount has successfully entered the 'dead' state. 
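The recurring "back-off 5m0s restarting failed container" errors are the kubelet's crash-loop backoff at its ceiling: each failed restart roughly doubles the wait until it hits a five-minute cap, so a container that keeps exiting within a second of starting (as ovnkube-node does here, exit status 1 after consuming 563ms of CPU) quickly settles at 5m0s between attempts. A sketch of that arithmetic, assuming the kubelet's usual defaults of a 10s initial delay, a factor of 2, and a 5m cap:

    // backoff_sketch.go: illustrates why the kubelet's messages settle on
    // "back-off 5m0s". The 10s initial delay, doubling factor, and 5m cap are
    // assumed kubelet defaults, not values read from this node's config.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second
    	maxDelay := 5 * time.Minute
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("failed restart %d -> back-off %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay // from the 6th failure on, every wait is 5m0s
    		}
    	}
    }
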
Jan 23 16:42:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:15.229343009Z" level=info msg="Removed container 4f80c9a9daeb183a4de1cd16e52dc28495ff78b15f6d437cfa0a059b581b9adc: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=b6f22c1f-d93b-43d7-aba1-db4cac364605 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:42:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:16.186934 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/184.log" Jan 23 16:42:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:16.188979 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:42:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:16.189474 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:27.868059 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:27.868257 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:27.868267 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:27.868274 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:27.868282 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:27.868288 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:27.868296 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:42:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:28.143661891Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:42:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:31.996410 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:42:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:31.996990 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node 
pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.036279501Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.036524252Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d2b3d624\x2d7a11\x2d4cf6\x2db050\x2df34d80a9d527.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d2b3d624\x2d7a11\x2d4cf6\x2db050\x2df34d80a9d527.mount has successfully entered the 'dead' state. Jan 23 16:42:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d2b3d624\x2d7a11\x2d4cf6\x2db050\x2df34d80a9d527.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d2b3d624\x2d7a11\x2d4cf6\x2db050\x2df34d80a9d527.mount has successfully entered the 'dead' state. Jan 23 16:42:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d2b3d624\x2d7a11\x2d4cf6\x2db050\x2df34d80a9d527.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d2b3d624\x2d7a11\x2d4cf6\x2db050\x2df34d80a9d527.mount has successfully entered the 'dead' state. 
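[Editor's note] The repeated "back-off 5m0s restarting failed container=ovnkube-node" errors are kubelet's per-container crash backoff: after each failed restart the delay roughly doubles until it hits a cap, and 5m0s is the cap shown in the log. A rough sketch of that doubling-with-cap policy, assuming kubelet's commonly documented defaults of a 10s initial delay and a 5m maximum (not the kubelet's actual code, which also resets the backoff after a quiet period):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second // assumed starting delay
		maxDelay     = 5 * time.Minute  // matches "back-off 5m0s" in the log
	)
	delay := initialDelay
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // once capped, every retry waits the full 5m
		}
	}
}

Running this prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s on every subsequent restart, which is why the same "back-off 5m0s" line recurs for ovnkube-node-897lw throughout this section.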
Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.100321710Z" level=info msg="runSandbox: deleting pod ID ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba from idIndex" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.100344682Z" level=info msg="runSandbox: removing pod sandbox ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.100362203Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.100380796Z" level=info msg="runSandbox: unmounting shmPath for sandbox ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.117407094Z" level=info msg="runSandbox: removing pod sandbox from storage: ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.120910254Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:32.120927044Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=69fa9497-3043-4f27-a753-c959fcf8bc65 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:32.121132 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:42:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:32.121174 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:42:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:32.121196 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:42:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:32.121245 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ae9fff3b7214ef25c589a62a1781c921c3ae4f43138f5e55b0a24e430e2920ba): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.033977079Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.034024187Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-22e61ff9\x2d7ad7\x2d4c35\x2d87a8\x2d21d4338e6406.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-22e61ff9\x2d7ad7\x2d4c35\x2d87a8\x2d21d4338e6406.mount has successfully entered the 'dead' state. Jan 23 16:42:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-22e61ff9\x2d7ad7\x2d4c35\x2d87a8\x2d21d4338e6406.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-22e61ff9\x2d7ad7\x2d4c35\x2d87a8\x2d21d4338e6406.mount has successfully entered the 'dead' state. Jan 23 16:42:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-22e61ff9\x2d7ad7\x2d4c35\x2d87a8\x2d21d4338e6406.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-22e61ff9\x2d7ad7\x2d4c35\x2d87a8\x2d21d4338e6406.mount has successfully entered the 'dead' state. 
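[Editor's note] Every sandbox add/del failure in this section bottoms out in the same condition: Multus is configured with a readiness indicator file (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf, written by the OVN-Kubernetes node agent once the default network is up) and polls for it until a timeout. Because ovnkube-node is itself in CrashLoopBackOff, the file never appears, so both "add" and "del" operations fail with "timed out waiting for the condition". A minimal sketch of that gate, assuming the path from the log and using wait.PollImmediate, which the "PollImmediate error" wording points to (the interval and timeout values here are placeholders, not Multus's real settings):

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf" // path from the log

	// Poll until the default-network plugin has written its config file.
	err := wait.PollImmediate(1*time.Second, 10*time.Second, func() (bool, error) {
		if _, err := os.Stat(indicator); err == nil {
			return true, nil // default network is ready
		}
		return false, nil // not there yet; keep polling
	})
	if err != nil {
		// The condition surfaced throughout this log:
		fmt.Printf("PollImmediate error waiting for ReadinessIndicatorFile: %v\n", err)
	}
}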
Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.074315627Z" level=info msg="runSandbox: deleting pod ID 446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180 from idIndex" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.074345723Z" level=info msg="runSandbox: removing pod sandbox 446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.074362621Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.074379367Z" level=info msg="runSandbox: unmounting shmPath for sandbox 446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.098485415Z" level=info msg="runSandbox: removing pod sandbox from storage: 446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.102112790Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:39.102132390Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c5225a36-e156-484b-a9a8-0477076d817a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:39.102389 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:42:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:39.102433 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:42:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:39.102454 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:42:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:39.102495 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(446a059f90375f30a2715462dee8670e44ce5711520c4ce76282e12595a7c180): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.035979295Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.036018894Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1e296172\x2daed6\x2d4604\x2db37f\x2dd7b5a6d43307.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1e296172\x2daed6\x2d4604\x2db37f\x2dd7b5a6d43307.mount has successfully entered the 'dead' state. Jan 23 16:42:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1e296172\x2daed6\x2d4604\x2db37f\x2dd7b5a6d43307.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1e296172\x2daed6\x2d4604\x2db37f\x2dd7b5a6d43307.mount has successfully entered the 'dead' state. Jan 23 16:42:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1e296172\x2daed6\x2d4604\x2db37f\x2dd7b5a6d43307.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1e296172\x2daed6\x2d4604\x2db37f\x2dd7b5a6d43307.mount has successfully entered the 'dead' state. 
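[Editor's note] The run-utsns-*, run-ipcns-* and run-netns-*.mount "Succeeded" messages accompanying each cleanup track the teardown of the sandbox's pinned namespaces: each sandbox keeps its UTS, IPC and network namespaces alive as bind-mounts under /run, and unmounting them is what systemd reports as the transient mount unit entering the 'dead' state (the \x2d sequences are systemd's escaping of "-" in unit names). A sketch of the unmount step, with assumed pin paths following the run-*ns-<uuid> pattern from the log (requires root; MNT_DETACH gives a lazy unmount):

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical namespace pin paths matching the unit names above.
	for _, p := range []string{
		"/run/utsns/1e296172-aed6-4604-b37f-d7b5a6d43307",
		"/run/ipcns/1e296172-aed6-4604-b37f-d7b5a6d43307",
		"/run/netns/1e296172-aed6-4604-b37f-d7b5a6d43307",
	} {
		if err := unix.Unmount(p, unix.MNT_DETACH); err != nil {
			log.Printf("unmount %s: %v", p, err)
		}
	}
}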
Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.075319519Z" level=info msg="runSandbox: deleting pod ID 7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8 from idIndex" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.075347493Z" level=info msg="runSandbox: removing pod sandbox 7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.075365843Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.075379437Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.095457638Z" level=info msg="runSandbox: removing pod sandbox from storage: 7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.098808226Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:40.098835505Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=f23c5f59-24d0-43ca-b39f-595326a61cc2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:40.098970 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:40.099013 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:40.099036 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:40.099083 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(7b451040e2226771c19abdbc9cdd0ff1209b77b67956f78c49915dd0bfb41fe8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.032309773Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.032355662Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2c126742\x2dc159\x2d48e2\x2dab60\x2dd361b29c1476.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2c126742\x2dc159\x2d48e2\x2dab60\x2dd361b29c1476.mount has successfully entered the 'dead' state. Jan 23 16:42:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2c126742\x2dc159\x2d48e2\x2dab60\x2dd361b29c1476.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2c126742\x2dc159\x2d48e2\x2dab60\x2dd361b29c1476.mount has successfully entered the 'dead' state. Jan 23 16:42:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2c126742\x2dc159\x2d48e2\x2dab60\x2dd361b29c1476.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2c126742\x2dc159\x2d48e2\x2dab60\x2dd361b29c1476.mount has successfully entered the 'dead' state. 
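[Editor's note] Each failed RunPodSandbox request above is followed by the same cleanup sequence, logged as runSandbox messages under a single request id: delete the pod ID from the idIndex, remove the sandbox, delete its container ID, unmount the shm path, remove it from storage, then release the reserved container and sandbox names. Paraphrased as a runnable sketch with stubbed, hypothetical helper names (the real code is CRI-O's; only the ordering is taken from the log):

package main

import "log"

// Stubs mirroring the runSandbox cleanup messages in the log.
func deleteFromIDIndex(id string) { log.Printf("deleting pod ID %s from idIndex", id) }
func removeSandbox(id string)     { log.Printf("removing pod sandbox %s", id) }
func deleteContainerID(id string) { log.Printf("deleting container ID from idIndex for sandbox %s", id) }
func unmountShm(id string)        { log.Printf("unmounting shmPath for sandbox %s", id) }
func removeFromStorage(id string) { log.Printf("removing pod sandbox from storage: %s", id) }
func releaseNames(ctr, pod string) {
	log.Printf("releasing container name: %s", ctr)
	log.Printf("releasing pod sandbox name: %s", pod)
}

func main() {
	const id = "13e2b1db6102..." // sandbox ID; truncated placeholder
	// Order taken directly from the runSandbox log lines above.
	deleteFromIDIndex(id)
	removeSandbox(id)
	deleteContainerID(id)
	unmountShm(id)
	removeFromStorage(id)
	releaseNames("k8s_POD_...", "k8s_...")
}

Only after this cleanup completes does kubelet surface the CreatePodSandboxError and re-queue the pod, which is why every failing pod in this section cycles through the full sequence before its next attempt.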
Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.077287345Z" level=info msg="runSandbox: deleting pod ID 13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990 from idIndex" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.077316764Z" level=info msg="runSandbox: removing pod sandbox 13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.077332178Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.077345758Z" level=info msg="runSandbox: unmounting shmPath for sandbox 13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.097519604Z" level=info msg="runSandbox: removing pod sandbox from storage: 13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.100935438Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:41.100953825Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=64c2c018-5575-4bad-b422-5043c45e116e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:41.101121 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:42:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:41.101166 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:42:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:41.101192 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:42:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:41.101253 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(13e2b1db610207951bd8cd948db625e89728068ac5a23cc14a265105912f5990): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.031594725Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.031641368Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-07e62555\x2d26c7\x2d474a\x2da809\x2da0be0e097b2b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-07e62555\x2d26c7\x2d474a\x2da809\x2da0be0e097b2b.mount has successfully entered the 'dead' state. Jan 23 16:42:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-07e62555\x2d26c7\x2d474a\x2da809\x2da0be0e097b2b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-07e62555\x2d26c7\x2d474a\x2da809\x2da0be0e097b2b.mount has successfully entered the 'dead' state. Jan 23 16:42:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-07e62555\x2d26c7\x2d474a\x2da809\x2da0be0e097b2b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-07e62555\x2d26c7\x2d474a\x2da809\x2da0be0e097b2b.mount has successfully entered the 'dead' state. 
Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.070290876Z" level=info msg="runSandbox: deleting pod ID 3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b from idIndex" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.070320355Z" level=info msg="runSandbox: removing pod sandbox 3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.070335805Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.070349454Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.087449456Z" level=info msg="runSandbox: removing pod sandbox from storage: 3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.090905167Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.090925619Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=bc4f8e51-7cef-41ed-b678-bd2636de7f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:42.091045 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:42:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:42.091089 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:42:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:42.091112 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:42:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:42.091160 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(3933ccf003da3b7d9234fce258d5843168c52492104f1047e32f7a28970cc51b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:42:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:42.995838 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.996194637Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:42.996249188Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:43.008944713Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/98e02098-d905-4ce3-915a-ed6c49b9d2ea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:43.008966769Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:43.996525 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:42:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:43.997169 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.034049412Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.034090214Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3c851f41\x2d7cf9\x2d434c\x2dad15\x2dc9d3d3f2a655.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3c851f41\x2d7cf9\x2d434c\x2dad15\x2dc9d3d3f2a655.mount has successfully entered the 'dead' state. Jan 23 16:42:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3c851f41\x2d7cf9\x2d434c\x2dad15\x2dc9d3d3f2a655.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3c851f41\x2d7cf9\x2d434c\x2dad15\x2dc9d3d3f2a655.mount has successfully entered the 'dead' state. Jan 23 16:42:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3c851f41\x2d7cf9\x2d434c\x2dad15\x2dc9d3d3f2a655.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3c851f41\x2d7cf9\x2d434c\x2dad15\x2dc9d3d3f2a655.mount has successfully entered the 'dead' state. Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.073361145Z" level=info msg="runSandbox: deleting pod ID ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616 from idIndex" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.073390785Z" level=info msg="runSandbox: removing pod sandbox ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.073415253Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.073428748Z" level=info msg="runSandbox: unmounting shmPath for sandbox ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616-userdata-shm.mount has successfully entered the 'dead' state. 
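[Editor's note] The earlier "Got pod network &{Name:... Namespace:... ID:... UID:... NetNS:... Networks:[] RuntimeConfig:map[multus-cni-network:{...}] Aliases:map[]}" line (16:42:43) is Go's %+v rendering of the descriptor CRI-O hands to the CNI layer before the add that then times out. A struct with the same fields reproduces output of that shape; the field names are read off the log line, and the real type is ocicni's PodNetwork, so treat this as an approximation:

package main

import "fmt"

// Approximation of the per-network runtime config shown in the log
// ("{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}").
type RuntimeConfig struct {
	IP           string
	MAC          string
	PortMappings []string
	Bandwidth    *struct{}
	IpRanges     []string
}

// Approximation of the pod-network descriptor from the "Got pod network" line.
type PodNetwork struct {
	Name          string
	Namespace     string
	ID            string
	UID           string
	NetNS         string
	Networks      []string
	RuntimeConfig map[string]RuntimeConfig
	Aliases       map[string][]string
}

func main() {
	// Values copied from the 16:42:43 log line.
	pn := PodNetwork{
		Name:      "openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab",
		Namespace: "openshift-kube-scheduler",
		ID:        "d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89",
		UID:       "7cca1a4c-e8cc-4938-9e14-a4d8d979ad14",
		NetNS:     "/var/run/netns/98e02098-d905-4ce3-915a-ed6c49b9d2ea",
		RuntimeConfig: map[string]RuntimeConfig{
			"multus-cni-network": {},
		},
	}
	fmt.Printf("Got pod network &%+v\n", pn)
}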
Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.088505414Z" level=info msg="runSandbox: removing pod sandbox from storage: ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.091794179Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:44.091815716Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=9d526d73-09e5-4b27-aeb3-963e6e8e7c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:44.092173 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:42:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:44.092223 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:42:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:44.092246 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:42:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:44.092296 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ebad5ffcebb62f7e5df78b081a3a90d97350cdc9694ace298b942169c0cc6616): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.032844816Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.032884410Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-99d7c892\x2df680\x2d45d9\x2d9ceb\x2dcc60b22be9c7.mount: Succeeded.
Jan 23 16:42:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-99d7c892\x2df680\x2d45d9\x2d9ceb\x2dcc60b22be9c7.mount: Succeeded.
Jan 23 16:42:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-99d7c892\x2df680\x2d45d9\x2d9ceb\x2dcc60b22be9c7.mount: Succeeded.
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.079309496Z" level=info msg="runSandbox: deleting pod ID bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a from idIndex" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.079336098Z" level=info msg="runSandbox: removing pod sandbox bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.079351011Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.079363079Z" level=info msg="runSandbox: unmounting shmPath for sandbox bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a-userdata-shm.mount: Succeeded.
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.093469491Z" level=info msg="runSandbox: removing pod sandbox from storage: bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.097012749Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:46.097030689Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=7b321551-9e37-432a-99fa-1100503a043d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:46.097254 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:42:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:46.097298 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:42:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:46.097321 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:42:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:46.097370 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3f132d2f10b1eccfc8af090a4b4644be982e1acddc7fa83b58680d1fc9c47a): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.032876299Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.032909144Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d12d0df6\x2de9ca\x2d4feb\x2dbb0f\x2d7770fc95a5b5.mount: Succeeded.
Jan 23 16:42:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d12d0df6\x2de9ca\x2d4feb\x2dbb0f\x2d7770fc95a5b5.mount: Succeeded.
Jan 23 16:42:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d12d0df6\x2de9ca\x2d4feb\x2dbb0f\x2d7770fc95a5b5.mount: Succeeded.
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.067308843Z" level=info msg="runSandbox: deleting pod ID 5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf from idIndex" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.067331322Z" level=info msg="runSandbox: removing pod sandbox 5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.067344358Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.067356636Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf-userdata-shm.mount: Succeeded.
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.085441841Z" level=info msg="runSandbox: removing pod sandbox from storage: 5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.088921919Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:48.088940209Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4619834b-f013-475e-b982-60ad3079b2f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:48.089142 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:42:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:48.089185 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:42:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:48.089214 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:42:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:48.089261 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(5c1f19869d8683e9d49ec25916365f6406e04ca6079d0166b64ac3b4fdb8f9bf): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.030477263Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.030517446Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2e562cb8\x2dd5ed\x2d4ba1\x2dae3a\x2d76026f5c565e.mount: Succeeded.
Jan 23 16:42:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2e562cb8\x2dd5ed\x2d4ba1\x2dae3a\x2d76026f5c565e.mount: Succeeded.
Jan 23 16:42:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2e562cb8\x2dd5ed\x2d4ba1\x2dae3a\x2d76026f5c565e.mount: Succeeded.
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.067375064Z" level=info msg="runSandbox: deleting pod ID e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611 from idIndex" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.067407020Z" level=info msg="runSandbox: removing pod sandbox e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.067426491Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.067441160Z" level=info msg="runSandbox: unmounting shmPath for sandbox e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611-userdata-shm.mount: Succeeded.
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.079427539Z" level=info msg="runSandbox: removing pod sandbox from storage: e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.086310252Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:49.086335860Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=7151066e-5ca3-4038-9838-e1891d8d2389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:49.086596 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:42:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:49.086639 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:42:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:49.086660 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:42:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:49.086705 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e642b8a52ee7131263e86c26e77c9dec46152d916a3ee89c5e121f2f1da5e611): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.032548649Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.032584278Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1465c067\x2d88a4\x2d4303\x2db939\x2d01a3b9744695.mount: Succeeded.
Jan 23 16:42:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1465c067\x2d88a4\x2d4303\x2db939\x2d01a3b9744695.mount: Succeeded.
Jan 23 16:42:50 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1465c067\x2d88a4\x2d4303\x2db939\x2d01a3b9744695.mount: Succeeded.
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.076309260Z" level=info msg="runSandbox: deleting pod ID 3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a from idIndex" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.076335682Z" level=info msg="runSandbox: removing pod sandbox 3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.076348986Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.076359611Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a-userdata-shm.mount: Succeeded.
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.096435144Z" level=info msg="runSandbox: removing pod sandbox from storage: 3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.099753434Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:50.099770881Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=29ac8f8f-1219-4123-9b9c-6f5b35abf001 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:50.099947 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:42:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:50.099987 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:42:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:50.100011 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:42:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:50.100055 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3454437479c4d4bf31560d66e6fe80287faad89358972530b60efec108b9089a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 16:42:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:51.996082 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:51.996433626Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:51.996477950Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.008854482Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/b6b8bc33-d8ff-4197-add2-f858d59ffb47 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.008875385Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.035492706Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.035526745Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f4076dc9\x2d4328\x2d4509\x2da9fc\x2d54b1294221cc.mount: Succeeded.
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.073277140Z" level=info msg="runSandbox: deleting pod ID 148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775 from idIndex" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.073301524Z" level=info msg="runSandbox: removing pod sandbox 148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.073316113Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.073326796Z" level=info msg="runSandbox: unmounting shmPath for sandbox 148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.086435930Z" level=info msg="runSandbox: removing pod sandbox from storage: 148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.089117589Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.089135417Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=86f317dc-588e-4ab5-9975-04ad29bada2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:52.089344 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:52.089384 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:52.089406 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:52.089450 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:42:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f4076dc9\x2d4328\x2d4509\x2da9fc\x2d54b1294221cc.mount: Succeeded.
Jan 23 16:42:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f4076dc9\x2d4328\x2d4509\x2da9fc\x2d54b1294221cc.mount: Succeeded.
Jan 23 16:42:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-148b63407c39f132672428daed66a2d4a583696e92d0a823d0fc0de224f90775-userdata-shm.mount: Succeeded.
Jan 23 16:42:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:52.995826 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:52.996014 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.996089271Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.996323646Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.996197118Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:52.996448213Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:42:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:53.012989228Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/7e458abb-417d-4c87-8e34-9d678ba2ba84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:42:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:53.013010340Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:42:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:53.013153245Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/6c1c7e41-343e-438f-8eea-106487345478 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:42:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:53.013170604Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.138154487Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.138190029Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.141955162Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.142005387Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d844d8cd\x2d3957\x2d4bbf\x2da50f\x2d1d80537890ad.mount: Succeeded.
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.143076730Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.143108855Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-53a8d099\x2dbb08\x2d4310\x2d85c5\x2d6a6d04de44fd.mount: Succeeded.
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.148958466Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.149003138Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.149830941Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.149859413Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-65e1f5a3\x2d5561\x2d4ab0\x2d95ba\x2de19b1a3d2b18.mount: Succeeded.
Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-41fbb591\x2d900a\x2d4c46\x2d9561\x2daf7ae93f8a56.mount: Succeeded.
Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a864aef9\x2df88c\x2d4118\x2d81f6\x2d3193e1ad9f87.mount: Succeeded.
Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.196310394Z" level=info msg="runSandbox: deleting pod ID a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b from idIndex" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.196338154Z" level=info msg="runSandbox: removing pod sandbox a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.196352033Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.196363645Z" level=info msg="runSandbox: unmounting shmPath for sandbox a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.204333089Z" level=info msg="runSandbox: deleting pod ID ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae from idIndex" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.204368184Z" level=info msg="runSandbox: removing pod sandbox ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.204385033Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.204407793Z" level=info msg="runSandbox: unmounting shmPath for sandbox ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.204333883Z" level=info msg="runSandbox: deleting pod ID d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f from idIndex" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.204445369Z" level=info msg="runSandbox: removing pod sandbox d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.204459986Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.204473237Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.205372974Z" level=info msg="runSandbox: deleting pod ID d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64 from idIndex" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.205399702Z" level=info msg="runSandbox: removing pod sandbox d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.205413023Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.205425721Z" level=info msg="runSandbox: unmounting shmPath for sandbox d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.211347455Z" level=info msg="runSandbox: deleting pod ID 43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe from idIndex" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.211371902Z" level=info msg="runSandbox: removing pod sandbox 43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.211384028Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.211396426Z" level=info msg="runSandbox: unmounting shmPath for sandbox 43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.216412103Z" level=info msg="runSandbox: removing pod sandbox from storage: a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.216456554Z" level=info msg="runSandbox: removing pod sandbox from storage: d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.219499959Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f2873d21-a003-4897-8eb8-f3800e855fa8 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.219520803Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f2873d21-a003-4897-8eb8-f3800e855fa8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.219742 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.219790 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.219812 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.219859 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.222727799Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.222749447Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=8440890b-e95c-4aab-9bda-f75b27c2d929 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.222964 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.223006 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.223030 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.223075 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.224501220Z" level=info msg="runSandbox: removing pod sandbox from storage: d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.224504013Z" level=info msg="runSandbox: removing pod sandbox from storage: ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.227524808Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.227543272Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=6e6e1d0b-0428-4793-96b3-26aebe1d17b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.227737 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.227768 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.227790 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.227828 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.230735336Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.230756914Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=c2e68a6e-9b1b-4094-afa1-e7bb175fc56c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.230933 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.230968 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.230988 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.231029 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.232446106Z" level=info msg="runSandbox: removing pod sandbox from storage: 43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.235651220Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.235670806Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=2d8a87a1-7768-466f-b15b-a637b9581c9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.235843 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.235875 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.235898 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.235935 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:54.261211 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:54.261398 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:54.261486 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:54.261567 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261576568Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261611901Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:54.261644 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261715775Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261750663Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261799138Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261830336Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261802086Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261892059Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261871590Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.261978946Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.290951946Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/cbfa3750-919c-4be1-96c8-08a6707a7dab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.290974709Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.291772248Z" level=info msg="Got pod 
network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ef34103c-3bc9-4443-908f-bb680bfbdffe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.291794676Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.293035991Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/6d411d68-8ff4-4a59-ad99-56ecabc4ffa4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.293057686Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.295761812Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/5ffac0ae-4db9-47f7-8b4a-607d2a8e12dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.295784052Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.296429528Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/a4af1eec-8c6d-4a54-abd2-27367bb63603 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.296454601Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-41fbb591\x2d900a\x2d4c46\x2d9561\x2daf7ae93f8a56.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-41fbb591\x2d900a\x2d4c46\x2d9561\x2daf7ae93f8a56.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-41fbb591\x2d900a\x2d4c46\x2d9561\x2daf7ae93f8a56.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-41fbb591\x2d900a\x2d4c46\x2d9561\x2daf7ae93f8a56.mount has successfully entered the 'dead' state. 
Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a864aef9\x2df88c\x2d4118\x2d81f6\x2d3193e1ad9f87.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a864aef9\x2df88c\x2d4118\x2d81f6\x2d3193e1ad9f87.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a864aef9\x2df88c\x2d4118\x2d81f6\x2d3193e1ad9f87.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a864aef9\x2df88c\x2d4118\x2d81f6\x2d3193e1ad9f87.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-43e725dd3a5a8fb2f53a1c0a73f8b9fe14795bf5ac7e9d4d2ed9bd7c7cc9e7fe-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-65e1f5a3\x2d5561\x2d4ab0\x2d95ba\x2de19b1a3d2b18.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-65e1f5a3\x2d5561\x2d4ab0\x2d95ba\x2de19b1a3d2b18.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-65e1f5a3\x2d5561\x2d4ab0\x2d95ba\x2de19b1a3d2b18.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-65e1f5a3\x2d5561\x2d4ab0\x2d95ba\x2de19b1a3d2b18.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d215ee9baf7903e4416deb0b4a209557fadf7b71feb4ce5b6fc9400249011c64-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-53a8d099\x2dbb08\x2d4310\x2d85c5\x2d6a6d04de44fd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-53a8d099\x2dbb08\x2d4310\x2d85c5\x2d6a6d04de44fd.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-53a8d099\x2dbb08\x2d4310\x2d85c5\x2d6a6d04de44fd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-53a8d099\x2dbb08\x2d4310\x2d85c5\x2d6a6d04de44fd.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d844d8cd\x2d3957\x2d4bbf\x2da50f\x2d1d80537890ad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d844d8cd\x2d3957\x2d4bbf\x2da50f\x2d1d80537890ad.mount has successfully entered the 'dead' state. 
Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d844d8cd\x2d3957\x2d4bbf\x2da50f\x2d1d80537890ad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d844d8cd\x2d3957\x2d4bbf\x2da50f\x2d1d80537890ad.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d7a6edd4c31a2b7a5a9c185bd908f9cb64c8d97126135827f9172bdef81d868f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ad29a6814bbe572afbea94c80e9dd88ac6b2e3d4007ad0ef72f607f2ee2da1ae-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a4a0ddde5021b3165a038a112443b96324185e457c077df29c6bc3562a8c397b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:54.995467 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.995832643Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:54.995884547Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:54.996291 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:54.996807 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:42:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:55.007403128Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/608f55ed-3b02-40a7-b3fb-74e0eb7cb6d2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:55.007423244Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:56.996274 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:42:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:56.996650258Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:56.996703059Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.008661385Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ab806b17-d8ce-43ce-9aa1-a96e067e7fae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.008684884Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.032509036Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.032541547Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bb2f76aa\x2db2fa\x2d4e39\x2dab0a\x2dfc470ba55cb3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bb2f76aa\x2db2fa\x2d4e39\x2dab0a\x2dfc470ba55cb3.mount has successfully entered the 'dead' state. Jan 23 16:42:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bb2f76aa\x2db2fa\x2d4e39\x2dab0a\x2dfc470ba55cb3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-bb2f76aa\x2db2fa\x2d4e39\x2dab0a\x2dfc470ba55cb3.mount has successfully entered the 'dead' state. Jan 23 16:42:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bb2f76aa\x2db2fa\x2d4e39\x2dab0a\x2dfc470ba55cb3.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-bb2f76aa\x2db2fa\x2d4e39\x2dab0a\x2dfc470ba55cb3.mount has successfully entered the 'dead' state. Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.072284078Z" level=info msg="runSandbox: deleting pod ID 8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511 from idIndex" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.072308359Z" level=info msg="runSandbox: removing pod sandbox 8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.072321497Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.072334756Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.084454264Z" level=info msg="runSandbox: removing pod sandbox from storage: 8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.087366370Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:57.087385509Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=06f40f3a-47c7-4659-88f0-a8ea4d7a5238 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:57.087590 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:42:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:57.087626 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:42:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:57.087648 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:42:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:42:57.087692 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:42:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8eb2f540990e0d16e5e8b00295faae4dbe1c49e8424faa8632f683b71ccb8511-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:58.146727567Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:42:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:58.996099 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:58.996617471Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:58.996672831Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:59.007725339Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/32fca75d-8959-4eea-9174-4ec5a03290f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:59.007751562Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:42:59.995489 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:59.995816732Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:42:59.995865232Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:43:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:00.007570205Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/383aba57-54ef-467b-9e46-c1fc2ac5e272 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:00.007593718Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:01.995870 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:01.996255680Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:01.996300139Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:02.007321277Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/8aedc15d-84ad-41ef-b33c-099bd2ae1a33 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:02.007344621Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:02.996052 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:43:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:02.996062 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:02.996491938Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:02.996697858Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:02.996599656Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:02.996924955Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:43:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:03.011957335Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4ed6b6ca-0921-40df-91e2-d17ba624096f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:03.011978241Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:03.013361485Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/ea4ac088-6683-4cb7-a5a0-a83f4234518c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:03.013385090Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:06.997064 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:43:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:43:06.997618 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:43:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:11.996519 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:43:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:11.996949047Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:11.997002163Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:43:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:12.009421272Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/20d6544d-b9ce-49ef-940c-acd2b8703aa7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:12.009447772Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:17.996689 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:43:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:43:17.997213 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:27.868761 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:27.868957 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:27.868963 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:27.868971 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:27.868976 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:27.868983 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:27.868988 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:43:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:28.022531313Z" level=info msg="NetworkStart: stopping network for sandbox d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:28.022683817Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/98e02098-d905-4ce3-915a-ed6c49b9d2ea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:28.022706000Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:28.022712787Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:28.022720924Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:28.142547211Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:43:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:30.996537 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:43:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:43:30.997034 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:43:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:37.021981741Z" level=info msg="NetworkStart: stopping network for sandbox f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:37.022358542Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/b6b8bc33-d8ff-4197-add2-f858d59ffb47 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:37.022383466Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:37.022390439Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 
16:43:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:37.022396657Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.025509975Z" level=info msg="NetworkStart: stopping network for sandbox 8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.025661168Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/6c1c7e41-343e-438f-8eea-106487345478 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.025684389Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.025691006Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.025698441Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.026180888Z" level=info msg="NetworkStart: stopping network for sandbox 0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.026301919Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/7e458abb-417d-4c87-8e34-9d678ba2ba84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.026326065Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.026333322Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:38.026339534Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.304953654Z" level=info msg="NetworkStart: stopping network for sandbox 81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305117734Z" level=info msg="Got pod network 
&{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ef34103c-3bc9-4443-908f-bb680bfbdffe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305144605Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305152108Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305158733Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305311106Z" level=info msg="NetworkStart: stopping network for sandbox 282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305426448Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/cbfa3750-919c-4be1-96c8-08a6707a7dab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305450681Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305458797Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305466020Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.305994198Z" level=info msg="NetworkStart: stopping network for sandbox 379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.306133397Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/6d411d68-8ff4-4a59-ad99-56ecabc4ffa4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.306158491Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.306166563Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:39 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.306173462Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.308148089Z" level=info msg="NetworkStart: stopping network for sandbox 58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.308286235Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/5ffac0ae-4db9-47f7-8b4a-607d2a8e12dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.308313816Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.308322706Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.308329837Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.309642991Z" level=info msg="NetworkStart: stopping network for sandbox 1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.309776398Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/a4af1eec-8c6d-4a54-abd2-27367bb63603 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.309802778Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.309813609Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:39.309825469Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:40.020754661Z" level=info msg="NetworkStart: stopping network for sandbox bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:40.020890354Z" level=info msg="Got pod network 
&{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/608f55ed-3b02-40a7-b3fb-74e0eb7cb6d2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:40.020914268Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:40.020920765Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:40.020928221Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:42.024141296Z" level=info msg="NetworkStart: stopping network for sandbox e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:42.024309146Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ab806b17-d8ce-43ce-9aa1-a96e067e7fae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:42.024336865Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:42.024343984Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:42.024351179Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:43.996170 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:43:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:43:43.996684 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:44.020659989Z" level=info msg="NetworkStart: stopping network for sandbox 77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:44.020806499Z" level=info msg="Got pod 
network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/32fca75d-8959-4eea-9174-4ec5a03290f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:44.020832660Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:44.020839664Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:44.020845407Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:45.021569179Z" level=info msg="NetworkStart: stopping network for sandbox d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:45.021787380Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/383aba57-54ef-467b-9e46-c1fc2ac5e272 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:45.021816999Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:45.021825365Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:45.021834410Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:47.020566141Z" level=info msg="NetworkStart: stopping network for sandbox 8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:47.020709790Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/8aedc15d-84ad-41ef-b33c-099bd2ae1a33 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:47.020730683Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:47.020737653Z" level=warning msg="falling back to loading from 
existing plugins on disk" Jan 23 16:43:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:47.020744075Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026347044Z" level=info msg="NetworkStart: stopping network for sandbox 9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026509907Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/ea4ac088-6683-4cb7-a5a0-a83f4234518c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026536059Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026543356Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026551892Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026518042Z" level=info msg="NetworkStart: stopping network for sandbox 69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026703481Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4ed6b6ca-0921-40df-91e2-d17ba624096f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026725404Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026734218Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:48.026741113Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:57.022392300Z" level=info msg="NetworkStart: stopping network for sandbox e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:43:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:57.022593131Z" level=info msg="Got pod network 
&{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/20d6544d-b9ce-49ef-940c-acd2b8703aa7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:43:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:57.022619233Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:43:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:57.022626313Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:43:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:57.022632288Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:43:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:43:58.142719109Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:43:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:43:58.996976 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:43:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:43:58.997642 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:44:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:09.997106 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:44:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:09.997618 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.033521247Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.033776490Z" level=info msg="runSandbox: cleaning 
up namespaces after failing to run sandbox d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-98e02098\x2dd905\x2d4ce3\x2d915a\x2ded6c49b9d2ea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-98e02098\x2dd905\x2d4ce3\x2d915a\x2ded6c49b9d2ea.mount has successfully entered the 'dead' state. Jan 23 16:44:13 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-98e02098\x2dd905\x2d4ce3\x2d915a\x2ded6c49b9d2ea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-98e02098\x2dd905\x2d4ce3\x2d915a\x2ded6c49b9d2ea.mount has successfully entered the 'dead' state. Jan 23 16:44:13 hub-master-0.workload.bos2.lab systemd[1]: run-netns-98e02098\x2dd905\x2d4ce3\x2d915a\x2ded6c49b9d2ea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-98e02098\x2dd905\x2d4ce3\x2d915a\x2ded6c49b9d2ea.mount has successfully entered the 'dead' state. Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.090357191Z" level=info msg="runSandbox: deleting pod ID d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89 from idIndex" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.090388589Z" level=info msg="runSandbox: removing pod sandbox d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.090409887Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.090422970Z" level=info msg="runSandbox: unmounting shmPath for sandbox d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.106437114Z" level=info msg="runSandbox: removing pod sandbox from storage: d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.109452302Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:13.109470175Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=72374c09-c75f-4771-9a73-daa00d415c1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:13.109679 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:44:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:13.109728 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:44:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:13.109753 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:44:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:13.109801 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d923d9da5efc0c82e05c5bab38dfe6d4435619d5976e0250cf92a0815efa0b89): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.033118266Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.033162525Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b6b8bc33\x2dd8ff\x2d4197\x2dadd2\x2df858d59ffb47.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b6b8bc33\x2dd8ff\x2d4197\x2dadd2\x2df858d59ffb47.mount has successfully entered the 'dead' state. Jan 23 16:44:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b6b8bc33\x2dd8ff\x2d4197\x2dadd2\x2df858d59ffb47.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b6b8bc33\x2dd8ff\x2d4197\x2dadd2\x2df858d59ffb47.mount has successfully entered the 'dead' state. Jan 23 16:44:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b6b8bc33\x2dd8ff\x2d4197\x2dadd2\x2df858d59ffb47.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b6b8bc33\x2dd8ff\x2d4197\x2dadd2\x2df858d59ffb47.mount has successfully entered the 'dead' state. 
Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.073309760Z" level=info msg="runSandbox: deleting pod ID f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56 from idIndex" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.073336560Z" level=info msg="runSandbox: removing pod sandbox f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.073352371Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.073366312Z" level=info msg="runSandbox: unmounting shmPath for sandbox f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56-userdata-shm.mount: Succeeded.
Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.089472140Z" level=info msg="runSandbox: removing pod sandbox from storage: f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.093006775Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:22.093024060Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=52f30b18-cb19-4db6-bf12-ab904866eeb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:22.093235 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:22.093275 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:44:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:22.093299 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:44:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:22.093340 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f3f016dbb90e4df231ad4fd7a249ab7e3027f06beb4ce6e9c0dc20e8376b5d56): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.035808788Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.035846518Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.036451200Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.036478992Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6c1c7e41\x2d343e\x2d438f\x2d8eea\x2d106487345478.mount: Succeeded.
Jan 23 16:44:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7e458abb\x2d417d\x2d4c87\x2d8e34\x2d9d678ba2ba84.mount: Succeeded.
Jan 23 16:44:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6c1c7e41\x2d343e\x2d438f\x2d8eea\x2d106487345478.mount: Succeeded.
Jan 23 16:44:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7e458abb\x2d417d\x2d4c87\x2d8e34\x2d9d678ba2ba84.mount: Succeeded.
Jan 23 16:44:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6c1c7e41\x2d343e\x2d438f\x2d8eea\x2d106487345478.mount: Succeeded.
Jan 23 16:44:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7e458abb\x2d417d\x2d4c87\x2d8e34\x2d9d678ba2ba84.mount: Succeeded.
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.076303398Z" level=info msg="runSandbox: deleting pod ID 0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6 from idIndex" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.076326428Z" level=info msg="runSandbox: removing pod sandbox 0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.076339925Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.076351754Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6-userdata-shm.mount: Succeeded.
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.080288461Z" level=info msg="runSandbox: deleting pod ID 8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06 from idIndex" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.080312782Z" level=info msg="runSandbox: removing pod sandbox 8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.080325201Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.080337087Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.089411825Z" level=info msg="runSandbox: removing pod sandbox from storage: 0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.092853620Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.092872624Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1630b902-97cd-44ff-9b26-14eb08506371 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.093051 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.093094 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.093116 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.093160 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0eec42b783b8dd4934054ea9e27aa542eb8adf2faa99388d60a5cdfd9ab310e6): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.093437205Z" level=info msg="runSandbox: removing pod sandbox from storage: 8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.096674256Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:23.096693631Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=c5aa6f80-7254-44f8-b75c-f8b42c56d36f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.096884 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.096917 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.096940 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.096980 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:23.996777 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335"
Jan 23 16:44:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:23.997282 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:44:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8bfa9aa15a52dd60faba43b41975cbf61898e4a207b1acb85dd0f2882345fe06-userdata-shm.mount: Succeeded.
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.315757831Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.315804575Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.316464617Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.316492569Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.317259399Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.317293966Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.318293941Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.318334778Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ef34103c\x2d3bc9\x2d4443\x2d908f\x2dbb680bfbdffe.mount: Succeeded.
Jan 23 16:44:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cbfa3750\x2d919c\x2d4be1\x2d96c8\x2d08a6707a7dab.mount: Succeeded.
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.320817507Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.320844444Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5ffac0ae\x2d4db9\x2d47f7\x2d8b4a\x2d607d2a8e12dc.mount: Succeeded.
Jan 23 16:44:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6d411d68\x2d8ff4\x2d4a59\x2dad99\x2d56ecabc4ffa4.mount: Succeeded.
Jan 23 16:44:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a4af1eec\x2d8c6d\x2d4a54\x2dabd2\x2d27367bb63603.mount: Succeeded.
Jan 23 16:44:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ef34103c\x2d3bc9\x2d4443\x2d908f\x2dbb680bfbdffe.mount: Succeeded.
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.369309187Z" level=info msg="runSandbox: deleting pod ID 81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420 from idIndex" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.369337387Z" level=info msg="runSandbox: removing pod sandbox 81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.369353980Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.369367928Z" level=info msg="runSandbox: unmounting shmPath for sandbox 81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373356560Z" level=info msg="runSandbox: deleting pod ID 282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d from idIndex" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373386532Z" level=info msg="runSandbox: removing pod sandbox 282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373400583Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373421387Z" level=info msg="runSandbox: unmounting shmPath for sandbox 282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373361931Z" level=info msg="runSandbox: deleting pod ID 1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d from idIndex" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373481372Z" level=info msg="runSandbox: removing pod sandbox 1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373496265Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373508717Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373362321Z" level=info msg="runSandbox: deleting pod ID 379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89 from idIndex" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373551991Z" level=info msg="runSandbox: removing pod sandbox 379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373572285Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.373591163Z" level=info msg="runSandbox: unmounting shmPath for sandbox 379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.374278610Z" level=info msg="runSandbox: deleting pod ID 58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7 from idIndex" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.374305185Z" level=info msg="runSandbox: removing pod sandbox 58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.374318045Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.374331621Z" level=info msg="runSandbox: unmounting shmPath for sandbox 58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.385487299Z" level=info msg="runSandbox: removing pod sandbox from storage: 81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.388805579Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.388823736Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d7f5b2d2-bd6a-474d-b2bb-cd141b8febf9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.389031 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.389075 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.389098 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.389142 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.393442395Z" level=info msg="runSandbox: removing pod sandbox from storage: 379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.393458312Z" level=info msg="runSandbox: removing pod sandbox from storage: 1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.393489826Z" level=info msg="runSandbox: removing pod sandbox from storage: 282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.394438811Z" level=info msg="runSandbox: removing pod sandbox from storage: 58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.396973919Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.396998839Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=d8b74715-b66a-4e56-8abd-3a54a1abcd03 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.397451 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.397485 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.397507 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.397541 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.400075636Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.400094562Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=f8800226-9a07-47e4-86dd-5ae5e906b930 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.400200 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.400235 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.400256 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.400291 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.403123645Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.403143710Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=ba9d96f2-a351-43af-8b46-eccd9b493c44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.403290 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.403320 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.403340 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.403376 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.406067975Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.406086211Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=edc1eff0-7a14-4f53-a492-e77212ff7596 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.406313 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.406347 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.406368 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:24.406405 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:24.426524 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:24.426682 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.426832082Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.426863931Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:24.426851 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.426945491Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:24.426956 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.426972639Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:24.427066 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.427074477Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.427094262Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.427107166Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.427109393Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.427320179Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.427352748Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.456184088Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/c0e416e7-5386-4fef-8590-8576b864caf2
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.456219308Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.458062638Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5f305722-3c82-4ef1-81b6-f8a060e28a80 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.458083709Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.459711859Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/4c0eb5a9-61ae-4e8d-95b6-e7bf4e58c33f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.459733678Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.460882132Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/80f1532d-b716-4d4d-ad3e-d1e541754916 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.460905307Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.462973738Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a0df6fd3-d75c-49c4-9aeb-948670de2a84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:44:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:24.462994074Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.032348088Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.032391016Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-608f55ed\x2d3b02\x2d40a7\x2db3fb\x2d74e0eb7cb6d2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-608f55ed\x2d3b02\x2d40a7\x2db3fb\x2d74e0eb7cb6d2.mount has successfully entered the 'dead' state. Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a4af1eec\x2d8c6d\x2d4a54\x2dabd2\x2d27367bb63603.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a4af1eec\x2d8c6d\x2d4a54\x2dabd2\x2d27367bb63603.mount has successfully entered the 'dead' state. Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a4af1eec\x2d8c6d\x2d4a54\x2dabd2\x2d27367bb63603.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a4af1eec\x2d8c6d\x2d4a54\x2dabd2\x2d27367bb63603.mount has successfully entered the 'dead' state. Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5ffac0ae\x2d4db9\x2d47f7\x2d8b4a\x2d607d2a8e12dc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5ffac0ae\x2d4db9\x2d47f7\x2d8b4a\x2d607d2a8e12dc.mount has successfully entered the 'dead' state. Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5ffac0ae\x2d4db9\x2d47f7\x2d8b4a\x2d607d2a8e12dc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5ffac0ae\x2d4db9\x2d47f7\x2d8b4a\x2d607d2a8e12dc.mount has successfully entered the 'dead' state. Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6d411d68\x2d8ff4\x2d4a59\x2dad99\x2d56ecabc4ffa4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6d411d68\x2d8ff4\x2d4a59\x2dad99\x2d56ecabc4ffa4.mount has successfully entered the 'dead' state. Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6d411d68\x2d8ff4\x2d4a59\x2dad99\x2d56ecabc4ffa4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6d411d68\x2d8ff4\x2d4a59\x2dad99\x2d56ecabc4ffa4.mount has successfully entered the 'dead' state. 
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ef34103c\x2d3bc9\x2d4443\x2d908f\x2dbb680bfbdffe.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1060f54b3a791d733efeda17920146cb2fa8a3f74c5e1e8ac17faa2e03be3d0d-userdata-shm.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cbfa3750\x2d919c\x2d4be1\x2d96c8\x2d08a6707a7dab.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cbfa3750\x2d919c\x2d4be1\x2d96c8\x2d08a6707a7dab.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-58d4173d16aa87141af7001f23266217dbf5ef08547a4d77746037f1eb9eeff7-userdata-shm.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-379b933d5727bb3081e0366ec0106916789ff9fd86242b1b640003e9c64ead89-userdata-shm.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-81eef3111be5273ec4bbc9c43dcf2fe03896543ed5a6edcf98f0f046b470b420-userdata-shm.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-282d9b870f3a66e1eaffd266dbbf0f2a52a135ff2994639d8a597d8e45b17c9d-userdata-shm.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-608f55ed\x2d3b02\x2d40a7\x2db3fb\x2d74e0eb7cb6d2.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-608f55ed\x2d3b02\x2d40a7\x2db3fb\x2d74e0eb7cb6d2.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.073339615Z" level=info msg="runSandbox: deleting pod ID bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5 from idIndex" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.073364977Z" level=info msg="runSandbox: removing pod sandbox bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.073381058Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.073401686Z" level=info msg="runSandbox: unmounting shmPath for sandbox bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5-userdata-shm.mount: Succeeded.
Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.089460719Z" level=info msg="runSandbox: removing pod sandbox from storage: bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.092176628Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:25.092196954Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=94a508f7-5408-4094-839f-4f2b47135a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:25.092477 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:44:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:25.092656 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:44:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:25.092683 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:44:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:25.092736 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bb69c7d721d2b1d5e967e02dfd0791a7143c1b00399fc83cc3be8b0bd7fee0e5): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.036059615Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.036103615Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ab806b17\x2dd8ce\x2d43ce\x2d9aa1\x2da96e067e7fae.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ab806b17\x2dd8ce\x2d43ce\x2d9aa1\x2da96e067e7fae.mount has successfully entered the 'dead' state. Jan 23 16:44:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ab806b17\x2dd8ce\x2d43ce\x2d9aa1\x2da96e067e7fae.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ab806b17\x2dd8ce\x2d43ce\x2d9aa1\x2da96e067e7fae.mount has successfully entered the 'dead' state. Jan 23 16:44:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ab806b17\x2dd8ce\x2d43ce\x2d9aa1\x2da96e067e7fae.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ab806b17\x2dd8ce\x2d43ce\x2d9aa1\x2da96e067e7fae.mount has successfully entered the 'dead' state. 
Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.082302264Z" level=info msg="runSandbox: deleting pod ID e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506 from idIndex" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.082329164Z" level=info msg="runSandbox: removing pod sandbox e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.082346537Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.082363636Z" level=info msg="runSandbox: unmounting shmPath for sandbox e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.099473605Z" level=info msg="runSandbox: removing pod sandbox from storage: e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.103021308Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.103040104Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=65704d1e-25e2-440e-aa3b-62a291720338 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:27.103234 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:27.103283 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:27.103305 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:27.103353 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(e2c1140a7c64615f0eb6f26d5e9a21b252825b55e8618fa9df92e7f91c536506): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:27.869813 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:27.869833 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:27.869839 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:27.869848 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:27.869854 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:27.869860 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:27.869866 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:27.996335 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.996666816Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:27.996715693Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:28.008115439Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/c432c818-d9d6-4ef6-8297-a5804a586f09 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:28.008136845Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:28.141308955Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.032504056Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.032546793Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-32fca75d\x2d8959\x2d4eea\x2d9174\x2d4ec5a03290f8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-32fca75d\x2d8959\x2d4eea\x2d9174\x2d4ec5a03290f8.mount has successfully entered the 'dead' state. Jan 23 16:44:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-32fca75d\x2d8959\x2d4eea\x2d9174\x2d4ec5a03290f8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-32fca75d\x2d8959\x2d4eea\x2d9174\x2d4ec5a03290f8.mount has successfully entered the 'dead' state. Jan 23 16:44:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-32fca75d\x2d8959\x2d4eea\x2d9174\x2d4ec5a03290f8.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-32fca75d\x2d8959\x2d4eea\x2d9174\x2d4ec5a03290f8.mount has successfully entered the 'dead' state. Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.075296407Z" level=info msg="runSandbox: deleting pod ID 77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3 from idIndex" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.075320390Z" level=info msg="runSandbox: removing pod sandbox 77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.075334165Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.075347663Z" level=info msg="runSandbox: unmounting shmPath for sandbox 77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.087433078Z" level=info msg="runSandbox: removing pod sandbox from storage: 77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.090370070Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:29.090389593Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4262379e-7852-4203-8fba-3ccf09a5300b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:29.090602 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:44:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:29.090647 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:44:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:29.090670 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:44:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:29.090714 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(77acd330e2f78ed8abb01f148f8053ac175a17af6b45ce810e3356c3f37a8fb3): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.034126740Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.034319846Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-383aba57\x2d54ef\x2d467b\x2d9e46\x2dc1fc2ac5e272.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-383aba57\x2d54ef\x2d467b\x2d9e46\x2dc1fc2ac5e272.mount has successfully entered the 'dead' state. Jan 23 16:44:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-383aba57\x2d54ef\x2d467b\x2d9e46\x2dc1fc2ac5e272.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-383aba57\x2d54ef\x2d467b\x2d9e46\x2dc1fc2ac5e272.mount has successfully entered the 'dead' state. Jan 23 16:44:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-383aba57\x2d54ef\x2d467b\x2d9e46\x2dc1fc2ac5e272.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-383aba57\x2d54ef\x2d467b\x2d9e46\x2dc1fc2ac5e272.mount has successfully entered the 'dead' state. Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.075305314Z" level=info msg="runSandbox: deleting pod ID d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628 from idIndex" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.075334042Z" level=info msg="runSandbox: removing pod sandbox d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.075352003Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.075364898Z" level=info msg="runSandbox: unmounting shmPath for sandbox d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.087487336Z" level=info msg="runSandbox: removing pod sandbox from storage: d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.090927347Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:30.090946296Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=15d8aeb0-75ec-47a6-8864-f9b2d524a7a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:30.091155 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:44:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:30.091212 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:44:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:30.091237 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:44:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:30.091289 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d0283e138df41ae59a155b10c3e6e16652e47222b5d049386ad50c4bd3a3d628): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.031476946Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.031513571Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8aedc15d\x2d84ad\x2d41ef\x2db33c\x2d099bd2ae1a33.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8aedc15d\x2d84ad\x2d41ef\x2db33c\x2d099bd2ae1a33.mount has successfully entered the 'dead' state. Jan 23 16:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8aedc15d\x2d84ad\x2d41ef\x2db33c\x2d099bd2ae1a33.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8aedc15d\x2d84ad\x2d41ef\x2db33c\x2d099bd2ae1a33.mount has successfully entered the 'dead' state. Jan 23 16:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8aedc15d\x2d84ad\x2d41ef\x2db33c\x2d099bd2ae1a33.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8aedc15d\x2d84ad\x2d41ef\x2db33c\x2d099bd2ae1a33.mount has successfully entered the 'dead' state. 
Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.072284913Z" level=info msg="runSandbox: deleting pod ID 8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb from idIndex" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.072308952Z" level=info msg="runSandbox: removing pod sandbox 8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.072323519Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.072335898Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.088410216Z" level=info msg="runSandbox: removing pod sandbox from storage: 8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.092222118Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:32.092240126Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=8682f519-651b-4beb-8a0e-723c05dd6bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:32.092474 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:32.092518 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:44:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:32.092540 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:44:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:32.092588 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8cb06ca0cd0044136cdc90e19d1c6103109e0faf8e1e62a3f9541f5ae1391cbb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.037806530Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.037843300Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.039143524Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.039183582Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ea4ac088\x2d6683\x2d4cb7\x2da5a0\x2da83f4234518c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ea4ac088\x2d6683\x2d4cb7\x2da5a0\x2da83f4234518c.mount has successfully entered the 'dead' state.
Jan 23 16:44:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4ed6b6ca\x2d0921\x2d40df\x2d91e2\x2dd17ba624096f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4ed6b6ca\x2d0921\x2d40df\x2d91e2\x2dd17ba624096f.mount has successfully entered the 'dead' state.
Jan 23 16:44:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ea4ac088\x2d6683\x2d4cb7\x2da5a0\x2da83f4234518c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ea4ac088\x2d6683\x2d4cb7\x2da5a0\x2da83f4234518c.mount has successfully entered the 'dead' state.
Jan 23 16:44:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4ed6b6ca\x2d0921\x2d40df\x2d91e2\x2dd17ba624096f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4ed6b6ca\x2d0921\x2d40df\x2d91e2\x2dd17ba624096f.mount has successfully entered the 'dead' state.
Jan 23 16:44:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ea4ac088\x2d6683\x2d4cb7\x2da5a0\x2da83f4234518c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ea4ac088\x2d6683\x2d4cb7\x2da5a0\x2da83f4234518c.mount has successfully entered the 'dead' state.
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.083306925Z" level=info msg="runSandbox: deleting pod ID 9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b from idIndex" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.083332156Z" level=info msg="runSandbox: removing pod sandbox 9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.083347523Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.083360771Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.087306666Z" level=info msg="runSandbox: deleting pod ID 69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e from idIndex" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.087337891Z" level=info msg="runSandbox: removing pod sandbox 69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.087353580Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.087368157Z" level=info msg="runSandbox: unmounting shmPath for sandbox 69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.095455448Z" level=info msg="runSandbox: removing pod sandbox from storage: 9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.098834223Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.098851830Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=46a9a787-75dc-4c5b-ac45-9e8c1cd92184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:33.099076 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:33.099120 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:44:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:33.099140 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:44:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:33.099182 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.099469945Z" level=info msg="runSandbox: removing pod sandbox from storage: 69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.102776132Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:33.102794006Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=6456a4eb-4a8d-4954-bfd4-3eb9740f9ec3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:33.103001 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:33.103046 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:44:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:33.103069 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:44:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:33.103117 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:44:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4ed6b6ca\x2d0921\x2d40df\x2d91e2\x2dd17ba624096f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4ed6b6ca\x2d0921\x2d40df\x2d91e2\x2dd17ba624096f.mount has successfully entered the 'dead' state.
Jan 23 16:44:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9584cbfc744a2226f00ff7ac347682a9d52ff29c564f293c955eb4b9ff2c1c4b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:44:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-69e6d1cdeb51e04bc5f1c66730cb65d62f9ec4745a5c5a15379d606bb2989e4e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:44:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:35.996452 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
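The \x2d runs in the mount-unit names above are not corruption; they are systemd's unit-name escaping of the /run/utsns, /run/ipcns, and /run/netns bind-mount paths. A simplified sketch of the transform, in which "/" maps to "-" and bytes outside a safe set are hex-escaped (systemd-escape(1) has additional rules, such as escaping a leading dot, that are omitted here):

```go
// Minimal sketch of systemd's unit-name escaping: "/" becomes "-", and any
// byte outside [A-Za-z0-9_.] is written as \xNN, so "-" itself becomes \x2d.
package main

import "fmt"

func systemdEscape(path string) string {
	var out []byte
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c == '_' || c == '.' ||
			('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z') || ('0' <= c && c <= '9'):
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	// Reproduces the netns mount-unit name logged above:
	// run-netns-4ed6b6ca\x2d0921\x2d40df\x2d91e2\x2dd17ba624096f.mount
	fmt.Println(systemdEscape("run/netns/4ed6b6ca-0921-40df-91e2-d17ba624096f") + ".mount")
}
```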
Jan 23 16:44:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:35.996585 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:44:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:35.996772568Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:35.996810347Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:35.996896701Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:35.996927896Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:35.997424 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335"
Jan 23 16:44:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:35.997916 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:44:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:36.011200011Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/10829865-629d-4d6c-8851-22ede51c4adc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:36.011233161Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:36.013252896Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/4c631644-c0c6-408f-8f4d-22824fb4808b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:36.013288711Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:37.996675 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:44:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:37.997200973Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:37.997248994Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:38.008537968Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/1e8317d8-0968-446b-94d8-011cd53e2d74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:38.008560660Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492278.1387] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 16:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492278.1393] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 16:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492278.1394] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492278.1396] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492278.1401] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492278.1406] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:44:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492279.8574] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:44:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:40.995554 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:44:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:40.995716 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:44:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:40.995891137Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:40.995925977Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:40.996063503Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:40.996113962Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:41.014911619Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/3c80e2b9-d60b-430f-9fd4-d930085e2505 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:41.014942879Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:41.016298606Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/700f7ed8-c7d5-4dbf-8eec-c1a66bf78542 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:41.016321170Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:41.996718 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:41.997073188Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:41.997111719Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.008370818Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/fddcd97b-786d-4672-b24f-09dda46ff2b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.008390492Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.034311271Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.034345558Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-20d6544d\x2db9ce\x2d49ef\x2d940c\x2dacd2b8703aa7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-20d6544d\x2db9ce\x2d49ef\x2d940c\x2dacd2b8703aa7.mount has successfully entered the 'dead' state.
Jan 23 16:44:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-20d6544d\x2db9ce\x2d49ef\x2d940c\x2dacd2b8703aa7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-20d6544d\x2db9ce\x2d49ef\x2d940c\x2dacd2b8703aa7.mount has successfully entered the 'dead' state.
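The "Got pod network &{...}" messages are Go %+v dumps of the descriptor CRI-O hands to its CNI layer for each sandbox. A rough reconstruction with a struct shaped after the fields visible in the log; the type names and field types here are assumptions for illustration, not CRI-O's actual ocicni definitions:

```go
// Illustrative stand-in for the pod-network descriptor printed in the
// "Got pod network &{...}" lines; values copied from the dns-default-srzv5 entry.
package main

import "fmt"

type RuntimeConfig struct {
	IP           string
	MAC          string
	PortMappings []string
	Bandwidth    string
	IpRanges     []string
}

type PodNetwork struct {
	Name          string
	Namespace     string
	ID            string // pod sandbox ID
	UID           string // pod UID
	NetNS         string // network namespace path
	Networks      []string
	RuntimeConfig map[string]RuntimeConfig
	Aliases       map[string][]string
}

func main() {
	pn := &PodNetwork{
		Name:      "dns-default-srzv5",
		Namespace: "openshift-dns",
		ID:        "e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e",
		UID:       "3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e",
		NetNS:     "/var/run/netns/10829865-629d-4d6c-8851-22ede51c4adc",
		RuntimeConfig: map[string]RuntimeConfig{
			"multus-cni-network": {},
		},
	}
	// %+v on a struct pointer prints "&{Name:... Namespace:...}", which is
	// exactly the shape of the log lines above.
	fmt.Printf("Got pod network %+v\n", pn)
}
```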
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.071299728Z" level=info msg="runSandbox: deleting pod ID e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692 from idIndex" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.071323616Z" level=info msg="runSandbox: removing pod sandbox e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.071336206Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.071350263Z" level=info msg="runSandbox: unmounting shmPath for sandbox e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.083405664Z" level=info msg="runSandbox: removing pod sandbox from storage: e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.086283605Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.086304609Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=3dd6e9fb-5060-49c0-9138-b7ee41bbded1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:42.086449 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:42.086490 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:42.086512 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:42.086560 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:44:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-20d6544d\x2db9ce\x2d49ef\x2d940c\x2dacd2b8703aa7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-20d6544d\x2db9ce\x2d49ef\x2d940c\x2dacd2b8703aa7.mount has successfully entered the 'dead' state.
Jan 23 16:44:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e5352ca90c8a8bbba3f277143bb14462a053021c246872f0c4828f8537a32692-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:44:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:42.995664 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:42.995793 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.995998600Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.996046902Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.996054010Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:42.996084361Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:43.010463921Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/7055d04f-03e1-4cac-a446-5596f0118b2a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:43.010618142Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:43.011845361Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/6780266d-57d2-45ed-ab65-bf03776733f4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:43.011867554Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:45.996339 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:44:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:45.996628226Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:45.996669047Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:46.008974873Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/f9fd2a1f-2e3a-476a-ab67-7a58b571dcb8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:46.009001742Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:46.995675 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:44:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:46.996074284Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:46.996126819Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:47.007278591Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/9c1aafa8-efd2-4cd7-a48c-88147b8f928a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:47.007301561Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:50.996125 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335"
Jan 23 16:44:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:44:50.996677 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
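The repeating "back-off 5m0s restarting failed container=ovnkube-node" entries are kubelet's crash-loop backoff at its ceiling: the restart delay roughly doubles after each failure until it is pinned at five minutes. A toy model of that schedule, assuming a 10s initial delay; the real kubelet also resets the backoff once a container has run cleanly for long enough:

```go
// Toy model of kubelet's crash-loop backoff; once the doubling delay passes
// the cap, every sync attempt logs the same "back-off 5m0s" error seen above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second // assumed base delay
		maxDelay     = 5 * time.Minute  // the "back-off 5m0s" cap in the log
	)
	delay := initialDelay
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart %d: back-off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s forever
		}
	}
}
```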
Jan 23 16:44:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:44:52.996306 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:44:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:52.996637090Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:44:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:52.996676647Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:44:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:53.008373077Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/4376328b-3c91-4a75-b60d-c33d61b681a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:44:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:53.008393764Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:44:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:44:58.144933081Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:45:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:02.996269 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335"
Jan 23 16:45:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:02.996949 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.470908420Z" level=info msg="NetworkStart: stopping network for sandbox dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471278574Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/c0e416e7-5386-4fef-8590-8576b864caf2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471308828Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471317624Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471324692Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471643777Z" level=info msg="NetworkStart: stopping network for sandbox 03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471777792Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5f305722-3c82-4ef1-81b6-f8a060e28a80 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471801541Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471809879Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.471816740Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.472805986Z" level=info msg="NetworkStart: stopping network for sandbox 825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.472914572Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/4c0eb5a9-61ae-4e8d-95b6-e7bf4e58c33f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.472936840Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.472944855Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.472951618Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.474633590Z" level=info msg="NetworkStart: stopping network for sandbox 92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.474801457Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/80f1532d-b716-4d4d-ad3e-d1e541754916 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.474825893Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.474834160Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.474840216Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.476775671Z" level=info msg="NetworkStart: stopping network for sandbox 8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.476891824Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a0df6fd3-d75c-49c4-9aeb-948670de2a84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.476912126Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.476918567Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:45:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:09.476924444Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:45:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:13.022090847Z" level=info msg="NetworkStart: stopping network for sandbox 9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:45:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:13.022262121Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/c432c818-d9d6-4ef6-8297-a5804a586f09 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:45:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:13.022288812Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:45:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:13.022296736Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:45:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:13.022304696Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI
network \"multus-cni-network\" (type=multus)" Jan 23 16:45:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:16.996778 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:45:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:16.997301 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024219601Z" level=info msg="NetworkStart: stopping network for sandbox e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024371594Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/10829865-629d-4d6c-8851-22ede51c4adc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024395085Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024401940Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024408845Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024554049Z" level=info msg="NetworkStart: stopping network for sandbox 0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024694746Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/4c631644-c0c6-408f-8f4d-22824fb4808b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024721772Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024728450Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:21.024735634Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" 
(type=multus)" Jan 23 16:45:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:23.022165607Z" level=info msg="NetworkStart: stopping network for sandbox ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:23.022329294Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/1e8317d8-0968-446b-94d8-011cd53e2d74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:23.022354259Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:23.022362340Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:23.022370702Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.027648159Z" level=info msg="NetworkStart: stopping network for sandbox a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.027812609Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/3c80e2b9-d60b-430f-9fd4-d930085e2505 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.027837728Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.027846279Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.027852487Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.030105790Z" level=info msg="NetworkStart: stopping network for sandbox 3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.030234062Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/700f7ed8-c7d5-4dbf-8eec-c1a66bf78542 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.030258242Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.030267914Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:26.030274769Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:27.022087602Z" level=info msg="NetworkStart: stopping network for sandbox a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:27.022238145Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/fddcd97b-786d-4672-b24f-09dda46ff2b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:27.022261443Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:27.022268418Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:27.022274788Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:27.870641 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:27.870662 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:27.870669 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:27.870676 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:27.870682 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:27.870690 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:45:27 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:27.870697 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:27.876092212Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=2e8f42ba-aed3-4fa4-9f6e-4b03c3a07536 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:27.876256977Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2e8f42ba-aed3-4fa4-9f6e-4b03c3a07536 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027058016Z" level=info msg="NetworkStart: stopping network for sandbox f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027187484Z" level=info msg="NetworkStart: stopping network for sandbox 2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027302675Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/7055d04f-03e1-4cac-a446-5596f0118b2a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027334773Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027343873Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027350663Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027336849Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/6780266d-57d2-45ed-ab65-bf03776733f4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027433128Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" 
Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027440707Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.027447332Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:28.143201197Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:45:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:29.996302 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:45:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:29.996948 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:45:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:31.022320719Z" level=info msg="NetworkStart: stopping network for sandbox cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:31.022471449Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/f9fd2a1f-2e3a-476a-ab67-7a58b571dcb8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:31.022498173Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:31.022505799Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:31.022512837Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:32.020286655Z" level=info msg="NetworkStart: stopping network for sandbox 034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:32.020430817Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/9c1aafa8-efd2-4cd7-a48c-88147b8f928a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:32.020452568Z" level=error 
msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:32.020459163Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:32.020466043Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:38.021055652Z" level=info msg="NetworkStart: stopping network for sandbox 6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:38.021224025Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/4376328b-3c91-4a75-b60d-c33d61b681a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:38.021248991Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:45:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:38.021255765Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:45:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:38.021262416Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:40.996350 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:45:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:40.996859 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.483099943Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.483337634Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.483101954Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.483440482Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.484001612Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.484036722Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.485329883Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.485371596Z" 
level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.487597059Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.487625904Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4c0eb5a9\x2d61ae\x2d4e8d\x2d95b6\x2de7bf4e58c33f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4c0eb5a9\x2d61ae\x2d4e8d\x2d95b6\x2de7bf4e58c33f.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5f305722\x2d3c82\x2d4ef1\x2d81b6\x2df8a060e28a80.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5f305722\x2d3c82\x2d4ef1\x2d81b6\x2df8a060e28a80.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c0e416e7\x2d5386\x2d4fef\x2d8590\x2d8576b864caf2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c0e416e7\x2d5386\x2d4fef\x2d8590\x2d8576b864caf2.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a0df6fd3\x2dd75c\x2d49c4\x2d9aeb\x2d948670de2a84.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a0df6fd3\x2dd75c\x2d49c4\x2d9aeb\x2d948670de2a84.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-80f1532d\x2db716\x2d4d4d\x2dad3e\x2dd1e541754916.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-80f1532d\x2db716\x2d4d4d\x2dad3e\x2dd1e541754916.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-80f1532d\x2db716\x2d4d4d\x2dad3e\x2dd1e541754916.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-80f1532d\x2db716\x2d4d4d\x2dad3e\x2dd1e541754916.mount has successfully entered the 'dead' state. 
Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4c0eb5a9\x2d61ae\x2d4e8d\x2d95b6\x2de7bf4e58c33f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4c0eb5a9\x2d61ae\x2d4e8d\x2d95b6\x2de7bf4e58c33f.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5f305722\x2d3c82\x2d4ef1\x2d81b6\x2df8a060e28a80.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5f305722\x2d3c82\x2d4ef1\x2d81b6\x2df8a060e28a80.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c0e416e7\x2d5386\x2d4fef\x2d8590\x2d8576b864caf2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c0e416e7\x2d5386\x2d4fef\x2d8590\x2d8576b864caf2.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a0df6fd3\x2dd75c\x2d49c4\x2d9aeb\x2d948670de2a84.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a0df6fd3\x2dd75c\x2d49c4\x2d9aeb\x2d948670de2a84.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-80f1532d\x2db716\x2d4d4d\x2dad3e\x2dd1e541754916.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-80f1532d\x2db716\x2d4d4d\x2dad3e\x2dd1e541754916.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c0e416e7\x2d5386\x2d4fef\x2d8590\x2d8576b864caf2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c0e416e7\x2d5386\x2d4fef\x2d8590\x2d8576b864caf2.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a0df6fd3\x2dd75c\x2d49c4\x2d9aeb\x2d948670de2a84.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a0df6fd3\x2dd75c\x2d49c4\x2d9aeb\x2d948670de2a84.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4c0eb5a9\x2d61ae\x2d4e8d\x2d95b6\x2de7bf4e58c33f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4c0eb5a9\x2d61ae\x2d4e8d\x2d95b6\x2de7bf4e58c33f.mount has successfully entered the 'dead' state. Jan 23 16:45:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5f305722\x2d3c82\x2d4ef1\x2d81b6\x2df8a060e28a80.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5f305722\x2d3c82\x2d4ef1\x2d81b6\x2df8a060e28a80.mount has successfully entered the 'dead' state. 
Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.546309877Z" level=info msg="runSandbox: deleting pod ID 92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a from idIndex" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.546353108Z" level=info msg="runSandbox: removing pod sandbox 92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.546368691Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.546382080Z" level=info msg="runSandbox: unmounting shmPath for sandbox 92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.546318591Z" level=info msg="runSandbox: deleting pod ID 03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca from idIndex" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.546457785Z" level=info msg="runSandbox: removing pod sandbox 03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.546473956Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.546489106Z" level=info msg="runSandbox: unmounting shmPath for sandbox 03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.547293672Z" level=info msg="runSandbox: deleting pod ID 825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f from idIndex" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.547322466Z" level=info msg="runSandbox: removing pod sandbox 825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.547336364Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.547353018Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.548360167Z" level=info msg="runSandbox: deleting pod ID dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b from idIndex" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.548387899Z" level=info msg="runSandbox: removing pod sandbox dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.548401885Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.548417519Z" level=info msg="runSandbox: unmounting shmPath for sandbox dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.554297078Z" level=info msg="runSandbox: deleting pod ID 8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486 from idIndex" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.554332253Z" level=info msg="runSandbox: removing pod sandbox 8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.554346471Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.554362949Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.562433953Z" level=info msg="runSandbox: removing pod sandbox from storage: 92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.562475498Z" level=info msg="runSandbox: removing pod sandbox from storage: 03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.562473175Z" level=info msg="runSandbox: removing pod sandbox from storage: 825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f" id=87e11208-2748-4495-bba4-a1ed9fe66014 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.563430592Z" level=info msg="runSandbox: removing pod sandbox from storage: dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.565935811Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.565955709Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=4a43842e-484f-40fe-a1cb-f8666a919227 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.566579 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.566625 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.566648 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.566694 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.569952394Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.569976002Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=5fa56779-a5b7-47f1-bd40-af5ef4fd1719 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.570210 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.570244 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.570264 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.570301 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.570485190Z" level=info msg="runSandbox: removing pod sandbox from storage: 8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.573373605Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.573393487Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=87e11208-2748-4495-bba4-a1ed9fe66014 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.573540 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.573570 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.573590 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.573627 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.576686064Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.576707603Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=a6840c3b-2f71-4b92-aab9-252beeee1a61 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.576921 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.576955 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.576977 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.577014 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.580042588Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.580064895Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d4df7dc6-dbbc-4fcb-9152-a15df13d57de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.580329 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.580361 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.580382 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:54.580420 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:54.595726 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:54.595894 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:54.596076 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596069094Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596108412Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:54.596136 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:45:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:54.596167 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596253151Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596288557Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596369620Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596390006Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596397767Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596407414Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596471472Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.596485972Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.628298859Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3ab32018-bd49-4cbf-9a6d-5dac5ca60ae4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.628329617Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.628771503Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c4c7fa03-40f8-4862-9405-b3289835586e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.628791331Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.629537667Z" level=info msg="Got pod network 
&{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c228b4bb-1741-4cd1-8f90-5e8ae2493793 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.629557159Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.630257878Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/9fa2a301-9516-419b-a8d3-8e617ea464e0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.630282222Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.630943759Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d4fe6358-4980-464e-8721-d3298c16815a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:45:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:54.630967687Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:45:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8fd5440e5c9c44950cac77843a8146195ccaf9554fd2e8a1cf80f6786c74b486-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:45:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-92e8f663d6a9a06b46fc27fc30592e967bc0038c6cc030f4cc7c5b7dd84b704a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:45:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-825ba44da343477d46062ed7384134cb99e4b25a8726be910c8732503859984f-userdata-shm.mount has successfully entered the 'dead' state. 
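Every sandbox failure above reduces to the same symptom: Multus blocks until the default-network readiness indicator file (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovnkube-node writes once the default OVN network is up) exists, then gives up with "pollimmediate error: timed out waiting for the condition". A minimal Go sketch of such a wait loop, assuming the k8s.io/apimachinery wait package (an illustration of the mechanism, not Multus's actual source):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until the delegate CNI config exists.
    // wait.PollImmediate runs the condition once immediately, then once per
    // interval; on timeout it returns an error whose text is exactly
    // "timed out waiting for the condition", matching the log lines above.
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
    	err := wait.PollImmediate(time.Second, timeout, func() (bool, error) {
    		if _, statErr := os.Stat(path); statErr != nil {
    			return false, nil // file not there yet; keep polling
    		}
    		return true, nil
    	})
    	if err != nil {
    		// Mirrors the "pollimmediate error: ..." suffix in the log.
    		return fmt.Errorf("pollimmediate error: %v", err)
    	}
    	return nil
    }

    func main() {
    	// Path from the log; the 10s timeout is an illustrative value.
    	err := waitForReadinessIndicator(
    		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
    	if err != nil {
    		fmt.Println(err)
    	}
    }

Because ovnkube-node itself is crash-looping (see the entries that follow), the indicator file never appears, so every CNI ADD for every pending pod times out the same way.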
Jan 23 16:45:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-03e12b56afa2cfe56c49351f7c53bc79dc45f6999becb679b36df3be8520d9ca-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:45:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dd581047bb0b3ef046c1818273aa1fa8416e19d692f90afec918dd0c5b5a4e1b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:45:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:45:55.996874 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:45:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:55.997552 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.033481005Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.033524013Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c432c818\x2dd9d6\x2d4ef6\x2d8297\x2da5804a586f09.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c432c818\x2dd9d6\x2d4ef6\x2d8297\x2da5804a586f09.mount has successfully entered the 'dead' state. Jan 23 16:45:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c432c818\x2dd9d6\x2d4ef6\x2d8297\x2da5804a586f09.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c432c818\x2dd9d6\x2d4ef6\x2d8297\x2da5804a586f09.mount has successfully entered the 'dead' state. Jan 23 16:45:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c432c818\x2dd9d6\x2d4ef6\x2d8297\x2da5804a586f09.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c432c818\x2dd9d6\x2d4ef6\x2d8297\x2da5804a586f09.mount has successfully entered the 'dead' state. Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.082328957Z" level=info msg="runSandbox: deleting pod ID 9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed from idIndex" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.082360072Z" level=info msg="runSandbox: removing pod sandbox 9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.082376463Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.082397818Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.094455344Z" level=info msg="runSandbox: removing pod sandbox from storage: 9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.097738660Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.097758956Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=4f2b4cba-5361-49e6-b3e6-310fa3eb81ac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:45:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:58.097960 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:45:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:58.098006 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:45:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:58.098033 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:45:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:45:58.098090 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(9dd9869d17920d7ad2ccac472b37f3df8833a79c5c2c159dcf84a8c6ab04a7ed): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:45:58.142312462Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:45:59 hub-master-0.workload.bos2.lab conmon[51470]: conmon b70eaf26f79b964d71d0 : container 51481 exited with status 1 Jan 23 16:45:59 hub-master-0.workload.bos2.lab systemd[1]: crio-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope has successfully entered the 'dead' state. 
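The recurring ovnkube-node entries above ("back-off 5m0s restarting failed container=ovnkube-node ...") show the kubelet's crash-loop back-off at its ceiling: the restart delay doubles after each failed start and is clamped at five minutes, resetting only after the container stays up long enough. A sketch of that doubling-with-cap policy; the 10s base is assumed from kubelet defaults, while the 5m cap matches the log text:

    package main

    import (
    	"fmt"
    	"time"
    )

    const (
    	initialBackoff = 10 * time.Second // kubelet's default base delay (assumed)
    	maxBackoff     = 5 * time.Minute  // matches "back-off 5m0s" in the log
    )

    // nextBackoff doubles the previous delay and clamps it at the cap.
    func nextBackoff(prev time.Duration) time.Duration {
    	if prev == 0 {
    		return initialBackoff
    	}
    	next := prev * 2
    	if next > maxBackoff {
    		next = maxBackoff
    	}
    	return next
    }

    func main() {
    	// Prints 10s 20s 40s 1m20s 2m40s 5m0s 5m0s: once the cap is hit,
    	// every retry waits the full five minutes, as seen for ovnkube-node.
    	d := time.Duration(0)
    	for i := 0; i < 7; i++ {
    		d = nextBackoff(d)
    		fmt.Println(d)
    	}
    }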
Jan 23 16:45:59 hub-master-0.workload.bos2.lab systemd[1]: crio-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope: Consumed 3.714s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope completed and consumed the indicated resources. Jan 23 16:45:59 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope has successfully entered the 'dead' state. Jan 23 16:45:59 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope: Consumed 52ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1.scope completed and consumed the indicated resources. Jan 23 16:46:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:00.608007 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1" exitCode=1 Jan 23 16:46:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:00.608037 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1} Jan 23 16:46:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:00.608062 8631 scope.go:115] "RemoveContainer" containerID="6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868" Jan 23 16:46:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:00.608338 8631 scope.go:115] "RemoveContainer" containerID="b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1" Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.608831472Z" level=info msg="Removing container: 6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868" id=fa18f7a8-c17a-4a09-a09a-cdb6027c5052 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.608939525Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=9e084ac2-a668-4a39-92a6-011e474380ff name=/runtime.v1.ImageService/ImageStatus Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.609345633Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9e084ac2-a668-4a39-92a6-011e474380ff name=/runtime.v1.ImageService/ImageStatus Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.609897240Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=fc6f290e-a5e2-4154-bd09-a84110802f70 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.609997428Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=fc6f290e-a5e2-4154-bd09-a84110802f70 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.610647412Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=ba3001da-4a3e-4ebb-891d-73d63f1451c5 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.610731671Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:00 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-f393cf59c060ff89ad048a98adf6dd2891cc444b20be0faca7eeefacd70d9c36-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-f393cf59c060ff89ad048a98adf6dd2891cc444b20be0faca7eeefacd70d9c36-merged.mount has successfully entered the 'dead' state. Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.655941650Z" level=info msg="Removed container 6b3b1e52cdbeeba69186724601847f3ae37ba7cc907a7438864dff834a4d2868: openshift-multus/multus-cdt6c/kube-multus" id=fa18f7a8-c17a-4a09-a09a-cdb6027c5052 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:46:00 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope. -- Subject: Unit crio-conmon-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:46:00 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac. -- Subject: Unit crio-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.754647252Z" level=info msg="Created container 7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac: openshift-multus/multus-cdt6c/kube-multus" id=ba3001da-4a3e-4ebb-891d-73d63f1451c5 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.755013222Z" level=info msg="Starting container: 7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac" id=d0b82a1f-3c1a-4307-b2b9-36f2b5dc95a5 name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.761826117Z" level=info msg="Started container" PID=69675 containerID=7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac description=openshift-multus/multus-cdt6c/kube-multus id=d0b82a1f-3c1a-4307-b2b9-36f2b5dc95a5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8 Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.766480339Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_11b02700-e928-4ddf-bed8-789d871aa5b0\"" Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.775964040Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.775984120Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.786626122Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\"" Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.796134989Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.796152449Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:46:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:00.796164136Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_11b02700-e928-4ddf-bed8-789d871aa5b0\"" Jan 23 16:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:01.615242 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac} Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.036053750Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out 
waiting for the condition" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.036262095Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.036494725Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.036533920Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4c631644\x2dc0c6\x2d408f\x2d8f4d\x2d22824fb4808b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4c631644\x2dc0c6\x2d408f\x2d8f4d\x2d22824fb4808b.mount has successfully entered the 'dead' state. Jan 23 16:46:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-10829865\x2d629d\x2d4d6c\x2d8851\x2d22ede51c4adc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-10829865\x2d629d\x2d4d6c\x2d8851\x2d22ede51c4adc.mount has successfully entered the 'dead' state. Jan 23 16:46:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4c631644\x2dc0c6\x2d408f\x2d8f4d\x2d22824fb4808b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4c631644\x2dc0c6\x2d408f\x2d8f4d\x2d22824fb4808b.mount has successfully entered the 'dead' state. Jan 23 16:46:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-10829865\x2d629d\x2d4d6c\x2d8851\x2d22ede51c4adc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-10829865\x2d629d\x2d4d6c\x2d8851\x2d22ede51c4adc.mount has successfully entered the 'dead' state. Jan 23 16:46:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4c631644\x2dc0c6\x2d408f\x2d8f4d\x2d22824fb4808b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4c631644\x2dc0c6\x2d408f\x2d8f4d\x2d22824fb4808b.mount has successfully entered the 'dead' state. Jan 23 16:46:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-10829865\x2d629d\x2d4d6c\x2d8851\x2d22ede51c4adc.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-10829865\x2d629d\x2d4d6c\x2d8851\x2d22ede51c4adc.mount has successfully entered the 'dead' state. Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.071287824Z" level=info msg="runSandbox: deleting pod ID e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e from idIndex" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.071315675Z" level=info msg="runSandbox: removing pod sandbox e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.071330915Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.071346398Z" level=info msg="runSandbox: unmounting shmPath for sandbox e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.071333012Z" level=info msg="runSandbox: deleting pod ID 0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb from idIndex" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.071431865Z" level=info msg="runSandbox: removing pod sandbox 0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.071450709Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.071466986Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.092435288Z" level=info msg="runSandbox: removing pod sandbox from storage: e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.092464023Z" level=info msg="runSandbox: removing pod sandbox from storage: 0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.096095022Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.096113694Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0cd1506f-b080-49d3-b142-4c48c6d12a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.096275 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.096317 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.096337 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.096380 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(e83267f83ddbdb8a3779ea919c00b09b75b1e6dbb9fba75b25ce578a2ed0ac5e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.099629325Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:06.099651789Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=54f596fe-7609-4f4c-8c18-ca815fd8bbb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.099865 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.099902 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.099937 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.099982 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(0e891f2cbb1eff145996ede65556215e801b2e9a18c9f1902d0d5da2b9e11feb): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:06.996095 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:46:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:06.996618 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.034272182Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.034316193Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1e8317d8\x2d0968\x2d446b\x2d94d8\x2d011cd53e2d74.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1e8317d8\x2d0968\x2d446b\x2d94d8\x2d011cd53e2d74.mount has successfully entered the 'dead' state. Jan 23 16:46:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1e8317d8\x2d0968\x2d446b\x2d94d8\x2d011cd53e2d74.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1e8317d8\x2d0968\x2d446b\x2d94d8\x2d011cd53e2d74.mount has successfully entered the 'dead' state. Jan 23 16:46:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1e8317d8\x2d0968\x2d446b\x2d94d8\x2d011cd53e2d74.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1e8317d8\x2d0968\x2d446b\x2d94d8\x2d011cd53e2d74.mount has successfully entered the 'dead' state. 
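For contrast, the 16:46:00 entries above show a recovery path that does work: when the restarted kube-multus container installs a fresh /var/lib/cni/bin/multus, crio's CNI monitor sees the CREATE event and re-resolves the default network from /etc/kubernetes/cni/net.d/00-multus.conf. A minimal sketch of such a directory watcher using github.com/fsnotify/fsnotify (illustrative only; crio's actual ocicni implementation differs):

    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	watcher, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer watcher.Close()

    	// Watch the CNI config and plugin-binary directories, the two paths
    	// the "CNI monitoring event ..." log lines above refer to.
    	for _, dir := range []string{"/etc/kubernetes/cni/net.d", "/var/lib/cni/bin"} {
    		if err := watcher.Add(dir); err != nil {
    			log.Fatal(err)
    		}
    	}

    	for {
    		select {
    		case ev := <-watcher.Events:
    			if ev.Op&(fsnotify.Create|fsnotify.Remove) != 0 {
    				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
    				// A real implementation would reload the network config
    				// here and update the default CNI network name.
    			}
    		case err := <-watcher.Errors:
    			log.Printf("watch error: %v", err)
    		}
    	}
    }

Note that this only restores the Multus binary and its top-level config; the pods above stay stuck until the OVN readiness indicator file appears, which is gated on the crash-looping ovnkube-node container.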
Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.072282790Z" level=info msg="runSandbox: deleting pod ID ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3 from idIndex" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.072309741Z" level=info msg="runSandbox: removing pod sandbox ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.072324724Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.072338294Z" level=info msg="runSandbox: unmounting shmPath for sandbox ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.085430932Z" level=info msg="runSandbox: removing pod sandbox from storage: ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.088778564Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:08.088796639Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=b208588a-86e9-4b96-9099-cd8be538ae2a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:08.089020 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:46:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:08.089067 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:46:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:08.089088 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:46:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:08.089134 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ced8b54f33dcdd27719796aca1ca6ed763712c6649236edd55d2eb3d98e294e3): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1360] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1365] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1366] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1612] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1614] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1626] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1629] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1629] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1631] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1634] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492368.1638] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:46:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492369.4944] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.039655368Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.039696747Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.040045302Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.040078452Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-700f7ed8\x2dc7d5\x2d4dbf\x2d8eec\x2dc1a66bf78542.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-700f7ed8\x2dc7d5\x2d4dbf\x2d8eec\x2dc1a66bf78542.mount has successfully entered the 'dead' state. Jan 23 16:46:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3c80e2b9\x2dd60b\x2d430f\x2d9fd4\x2dd930085e2505.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3c80e2b9\x2dd60b\x2d430f\x2d9fd4\x2dd930085e2505.mount has successfully entered the 'dead' state. Jan 23 16:46:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-700f7ed8\x2dc7d5\x2d4dbf\x2d8eec\x2dc1a66bf78542.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-700f7ed8\x2dc7d5\x2d4dbf\x2d8eec\x2dc1a66bf78542.mount has successfully entered the 'dead' state. 
Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.076295721Z" level=info msg="runSandbox: deleting pod ID 3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a from idIndex" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.076332883Z" level=info msg="runSandbox: removing pod sandbox 3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.076354057Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.076367676Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.076300256Z" level=info msg="runSandbox: deleting pod ID a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca from idIndex" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.076420335Z" level=info msg="runSandbox: removing pod sandbox a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.076435160Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.076449296Z" level=info msg="runSandbox: unmounting shmPath for sandbox a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.088483496Z" level=info msg="runSandbox: removing pod sandbox from storage: a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.088486415Z" level=info msg="runSandbox: removing pod sandbox from storage: 3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.091972583Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.091992085Z" level=info msg="runSandbox: 
releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=febbbcef-491a-4a24-8014-6f3bfd731f6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:11.092305 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:46:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:11.092352 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:46:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:11.092377 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:46:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:11.092425 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.095177043Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:11.095196243Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=8ad6e449-e374-450c-9693-a25997866ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:11.095339 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:46:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:11.095373 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:46:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:11.095393 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:46:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:11.095431 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:46:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-700f7ed8\x2dc7d5\x2d4dbf\x2d8eec\x2dc1a66bf78542.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-700f7ed8\x2dc7d5\x2d4dbf\x2d8eec\x2dc1a66bf78542.mount has successfully entered the 'dead' state. Jan 23 16:46:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3c80e2b9\x2dd60b\x2d430f\x2d9fd4\x2dd930085e2505.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3c80e2b9\x2dd60b\x2d430f\x2d9fd4\x2dd930085e2505.mount has successfully entered the 'dead' state. Jan 23 16:46:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3c80e2b9\x2dd60b\x2d430f\x2d9fd4\x2dd930085e2505.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3c80e2b9\x2dd60b\x2d430f\x2d9fd4\x2dd930085e2505.mount has successfully entered the 'dead' state. Jan 23 16:46:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3e4f14937c5584ae41f3f09bb2eeeb280d1693c0265a5ac61ede48869ef5976a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a724a05a0af8b09792976647c3e9b050eadc4f4b32f672fe8888d579bbc373ca-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.033352002Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.033389644Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fddcd97b\x2d786d\x2d4672\x2db24f\x2d09dda46ff2b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-fddcd97b\x2d786d\x2d4672\x2db24f\x2d09dda46ff2b2.mount has successfully entered the 'dead' state. Jan 23 16:46:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fddcd97b\x2d786d\x2d4672\x2db24f\x2d09dda46ff2b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-fddcd97b\x2d786d\x2d4672\x2db24f\x2d09dda46ff2b2.mount has successfully entered the 'dead' state. Jan 23 16:46:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fddcd97b\x2d786d\x2d4672\x2db24f\x2d09dda46ff2b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-fddcd97b\x2d786d\x2d4672\x2db24f\x2d09dda46ff2b2.mount has successfully entered the 'dead' state. 
Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.072382988Z" level=info msg="runSandbox: deleting pod ID a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc from idIndex" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.072408780Z" level=info msg="runSandbox: removing pod sandbox a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.072422176Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.072434926Z" level=info msg="runSandbox: unmounting shmPath for sandbox a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.088435259Z" level=info msg="runSandbox: removing pod sandbox from storage: a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.092001260Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.092020565Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=32f093ac-830b-4cbb-bf73-e506dcfd4a51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:12.092255 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:12.092414 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:12.092438 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:12.092490 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(a3d24243d0337d0f368b2033343c4565b076180fd8fc0ed19809375153dcd9bc): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:12.995531 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.995793709Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:12.995831149Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.008915640Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/061b5ebf-875e-4f34-b78b-028d6820b11c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.008946725Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.038591852Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.038634148Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.038812395Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.038846176Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6780266d\x2d57d2\x2d45ed\x2dab65\x2dbf03776733f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6780266d\x2d57d2\x2d45ed\x2dab65\x2dbf03776733f4.mount has successfully entered the 'dead' state. Jan 23 16:46:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7055d04f\x2d03e1\x2d4cac\x2da446\x2d5596f0118b2a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7055d04f\x2d03e1\x2d4cac\x2da446\x2d5596f0118b2a.mount has successfully entered the 'dead' state. Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.077305506Z" level=info msg="runSandbox: deleting pod ID f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6 from idIndex" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.077330546Z" level=info msg="runSandbox: removing pod sandbox f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.077345462Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.077359440Z" level=info msg="runSandbox: unmounting shmPath for sandbox f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.081304452Z" level=info msg="runSandbox: deleting pod ID 2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf from idIndex" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.081328210Z" level=info msg="runSandbox: removing pod sandbox 2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.081342305Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:46:13.081356282Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.094447107Z" level=info msg="runSandbox: removing pod sandbox from storage: f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.097155760Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.097175903Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=70635955-f920-4339-997b-816a9d50e604 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:13.097559 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:13.097607 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:13.097635 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:13.097693 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.103404791Z" level=info msg="runSandbox: removing pod sandbox from storage: 2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.106608178Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:13.106626580Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=6050056a-72a3-4f2a-99a0-8a2609eb090f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:13.106793 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:13.106828 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:13.106847 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:13.106884 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6780266d\x2d57d2\x2d45ed\x2dab65\x2dbf03776733f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6780266d\x2d57d2\x2d45ed\x2dab65\x2dbf03776733f4.mount has successfully entered the 'dead' state. Jan 23 16:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6780266d\x2d57d2\x2d45ed\x2dab65\x2dbf03776733f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6780266d\x2d57d2\x2d45ed\x2dab65\x2dbf03776733f4.mount has successfully entered the 'dead' state. Jan 23 16:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7055d04f\x2d03e1\x2d4cac\x2da446\x2d5596f0118b2a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7055d04f\x2d03e1\x2d4cac\x2da446\x2d5596f0118b2a.mount has successfully entered the 'dead' state. 
Jan 23 16:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7055d04f\x2d03e1\x2d4cac\x2da446\x2d5596f0118b2a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7055d04f\x2d03e1\x2d4cac\x2da446\x2d5596f0118b2a.mount has successfully entered the 'dead' state. Jan 23 16:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2f211b09d7877d3caf0e0cb55e9ba585b0e8b5397c59244cb8002b4659a773bf-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f59edbb766d52626fd3e4efd0a00cf1865c8e0c979303fd95bcbbaebdc37e3a6-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.032394100Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.032432881Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f9fd2a1f\x2d2e3a\x2d476a\x2dab67\x2d7a58b571dcb8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f9fd2a1f\x2d2e3a\x2d476a\x2dab67\x2d7a58b571dcb8.mount has successfully entered the 'dead' state. Jan 23 16:46:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f9fd2a1f\x2d2e3a\x2d476a\x2dab67\x2d7a58b571dcb8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f9fd2a1f\x2d2e3a\x2d476a\x2dab67\x2d7a58b571dcb8.mount has successfully entered the 'dead' state. Jan 23 16:46:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f9fd2a1f\x2d2e3a\x2d476a\x2dab67\x2d7a58b571dcb8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f9fd2a1f\x2d2e3a\x2d476a\x2dab67\x2d7a58b571dcb8.mount has successfully entered the 'dead' state. 
Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.068314031Z" level=info msg="runSandbox: deleting pod ID cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5 from idIndex" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.068342246Z" level=info msg="runSandbox: removing pod sandbox cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.068358592Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.068372949Z" level=info msg="runSandbox: unmounting shmPath for sandbox cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.088461332Z" level=info msg="runSandbox: removing pod sandbox from storage: cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.091888121Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:16.091907305Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=8752e662-bc0d-4303-9fd0-63a7426b9cf2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:16.092212 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:46:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:16.092258 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:46:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:16.092280 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:46:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:16.092326 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(cc6cfd68d5687bf5179cc6c6e4f5172ebf41ef1efe7ea0e4fd1f09cbdd8533e5): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.031278068Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.031316727Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9c1aafa8\x2defd2\x2d4cd7\x2da48c\x2d88147b8f928a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9c1aafa8\x2defd2\x2d4cd7\x2da48c\x2d88147b8f928a.mount has successfully entered the 'dead' state. Jan 23 16:46:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9c1aafa8\x2defd2\x2d4cd7\x2da48c\x2d88147b8f928a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9c1aafa8\x2defd2\x2d4cd7\x2da48c\x2d88147b8f928a.mount has successfully entered the 'dead' state. Jan 23 16:46:17 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9c1aafa8\x2defd2\x2d4cd7\x2da48c\x2d88147b8f928a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9c1aafa8\x2defd2\x2d4cd7\x2da48c\x2d88147b8f928a.mount has successfully entered the 'dead' state. 
Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.072304272Z" level=info msg="runSandbox: deleting pod ID 034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374 from idIndex" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.072329237Z" level=info msg="runSandbox: removing pod sandbox 034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.072344854Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.072357329Z" level=info msg="runSandbox: unmounting shmPath for sandbox 034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.088432561Z" level=info msg="runSandbox: removing pod sandbox from storage: 034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.091931560Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:17.091949372Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=0536dcc1-4cce-4033-82df-a547ee0dc1c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:17.092156 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:46:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:17.092198 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:46:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:17.092226 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:46:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:17.092274 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(034ac51cd491bda6f491d7672636a98bbd4edd231141aa6eba80162e45396374): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:46:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:17.997829 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:46:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:17.998367 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:46:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:18.995638 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:46:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:18.995984092Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:18.996023665Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:19.008676447Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/32aeb05f-58bf-4366-bcfa-16ffe16a4203 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:19.008701241Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:20.995582 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:20.995730 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:46:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:20.995914767Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:20.995952966Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:20.996016947Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:20.996061052Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:21.011329759Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/4793e499-77c3-4262-8218-db608ad7ab72 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:21.011543682Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:21.011336801Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/f06c51d1-ed7d-4005-bf6b-c4cdc32dab27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:21.011710999Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:23.031836893Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:46:23.031875346Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4376328b\x2d3c91\x2d4a75\x2db60d\x2dc33d61b681a6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4376328b\x2d3c91\x2d4a75\x2db60d\x2dc33d61b681a6.mount has successfully entered the 'dead' state. Jan 23 16:46:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4376328b\x2d3c91\x2d4a75\x2db60d\x2dc33d61b681a6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4376328b\x2d3c91\x2d4a75\x2db60d\x2dc33d61b681a6.mount has successfully entered the 'dead' state. Jan 23 16:46:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4376328b\x2d3c91\x2d4a75\x2db60d\x2dc33d61b681a6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4376328b\x2d3c91\x2d4a75\x2db60d\x2dc33d61b681a6.mount has successfully entered the 'dead' state. Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:23.067277100Z" level=info msg="runSandbox: deleting pod ID 6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3 from idIndex" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:23.067302135Z" level=info msg="runSandbox: removing pod sandbox 6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:23.067316342Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:23.067329748Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:23.083461696Z" level=info msg="runSandbox: removing pod sandbox from storage: 6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:23.086700416Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:23.086720537Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=e6308c54-7cda-4960-8fdb-0efae345618e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:23.086973 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:46:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:23.087021 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:46:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:23.087045 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:46:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:23.087100 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6684e2b0bb0f1c3e1b632c72f427d1aaffd800afa923c157959288ea22a82fd3): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:46:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:25.995954 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:46:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:25.996098 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:46:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:25.996214 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:25.996331 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:25.996500152Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:25.996553933Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:25.996591583Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:25.996626448Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:25.996666080Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:25.996680741Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:25.996705612Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:25.996637187Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.026868777Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/1d4712b6-7f67-4695-8b6b-c67aeaa9a3ab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.026898467Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.027504714Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/5e5dcf4c-a157-46d5-87b2-4ce66ba007aa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.027524751Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.028327119Z" 
level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/679f7075-92dc-4856-9610-4237cfb6f8f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.028348646Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.029067594Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/6c598135-6b02-49a0-9e8c-c84d9d00a86f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.029088846Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:26.996254 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.996487256Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.996523451Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:26.996529 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.996891323Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:26.996940633Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:27.011610047Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/f58995f4-b864-488b-b595-1a4b38be925f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:27.011633484Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:27.012298573Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/5c2aa717-e61a-4b06-84ad-b4f692ad1bcc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:27.012318161Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:27.870794 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:27.870815 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:27.870821 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:27.870830 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:27.870836 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:27.870844 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:27.870850 8631 kubelet_getters.go:182] "Pod status 
updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:46:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:28.144138478Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:46:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:28.996269 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:46:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:28.996901 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:46:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:30.995425 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:46:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:30.995763045Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:30.995802273Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:31.007639466Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/fb7a279b-df91-43c4-a208-9733990ba418 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:31.007812459Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:34.995473 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:34.995792241Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:34.995848993Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:46:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:35.011163883Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bb98ec12-6cbe-43a0-b2f0-16106fc13736 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:35.011193727Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.641862587Z" level=info msg="NetworkStart: stopping network for sandbox 7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.641914060Z" level=info msg="NetworkStart: stopping network for sandbox e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.642080119Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c228b4bb-1741-4cd1-8f90-5e8ae2493793 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.642080287Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3ab32018-bd49-4cbf-9a6d-5dac5ca60ae4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.642126255Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.642133960Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.642141287Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:39 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.642105766Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.642258563Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.642269667Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.643966267Z" level=info msg="NetworkStart: stopping network for sandbox eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644092762Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/9fa2a301-9516-419b-a8d3-8e617ea464e0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644116395Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644123116Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644128785Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644527908Z" level=info msg="NetworkStart: stopping network for sandbox f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644642271Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c4c7fa03-40f8-4862-9405-b3289835586e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644662699Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644669060Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.644675993Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.648575708Z" level=info 
msg="NetworkStart: stopping network for sandbox 627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.648689519Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d4fe6358-4980-464e-8721-d3298c16815a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.648713233Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.648720657Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:39.648727371Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:43.997026 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:46:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:43.997606 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:46:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:46:54.997097 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:46:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:46:54.997906 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:46:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:58.022845509Z" level=info msg="NetworkStart: stopping network for sandbox 94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:46:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:58.022992125Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/061b5ebf-875e-4f34-b78b-028d6820b11c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:46:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:46:58.023016165Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:46:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:58.023024156Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:46:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:58.023032565Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:46:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:46:58.143470598Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:47:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:04.022539602Z" level=info msg="NetworkStart: stopping network for sandbox 24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:04.022692665Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/32aeb05f-58bf-4366-bcfa-16ffe16a4203 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:04.022717839Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:04.022726736Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:04.022733235Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.025307471Z" level=info msg="NetworkStart: stopping network for sandbox e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.025448861Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/4793e499-77c3-4262-8218-db608ad7ab72 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.025470864Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.025477683Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.025483786Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:06 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:47:06.026920129Z" level=info msg="NetworkStart: stopping network for sandbox d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.027063613Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/f06c51d1-ed7d-4005-bf6b-c4cdc32dab27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.027086804Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.027094322Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:06.027102620Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:07.997138 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:47:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:07.997649 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.040721134Z" level=info msg="NetworkStart: stopping network for sandbox f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.040824502Z" level=info msg="NetworkStart: stopping network for sandbox 8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041051902Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/5e5dcf4c-a157-46d5-87b2-4ce66ba007aa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041083529Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041092795Z" level=warning msg="falling back to loading from 
existing plugins on disk" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041100129Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041086558Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/1d4712b6-7f67-4695-8b6b-c67aeaa9a3ab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041178783Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041185368Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041192777Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041285012Z" level=info msg="NetworkStart: stopping network for sandbox 2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041406891Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/679f7075-92dc-4856-9610-4237cfb6f8f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041428407Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041437362Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041443594Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041483153Z" level=info msg="NetworkStart: stopping network for sandbox b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041597538Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/6c598135-6b02-49a0-9e8c-c84d9d00a86f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041619309Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041626287Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:11.041631908Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026174937Z" level=info msg="NetworkStart: stopping network for sandbox 7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026173440Z" level=info msg="NetworkStart: stopping network for sandbox af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026361899Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/5c2aa717-e61a-4b06-84ad-b4f692ad1bcc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026390894Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026398182Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026407313Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026428046Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/f58995f4-b864-488b-b595-1a4b38be925f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026455256Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026462175Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:12.026468600Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:16 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:16.020634376Z" level=info msg="NetworkStart: stopping network for sandbox a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:16.020775433Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/fb7a279b-df91-43c4-a208-9733990ba418 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:16.020799046Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:16.020805378Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:16.020811322Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:20.024910474Z" level=info msg="NetworkStart: stopping network for sandbox baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:20.025068326Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bb98ec12-6cbe-43a0-b2f0-16106fc13736 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:20.025093151Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:47:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:20.025100710Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:47:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:20.025107606Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:21.997028 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" Jan 23 16:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:21.997842386Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=d2d3f3a0-81f1-42a0-a37a-0bacb75fe2a5 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:21.998021512Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d2d3f3a0-81f1-42a0-a37a-0bacb75fe2a5 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:21.998506153Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=db5dc587-c9f3-447b-999b-082f10879558 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:21.998646706Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=db5dc587-c9f3-447b-999b-082f10879558 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:21.999495592Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=8376393f-071c-412c-afa1-4dad9c79aec5 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:21.999582309Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:47:22 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope. -- Subject: Unit crio-conmon-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:47:22 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9. -- Subject: Unit crio-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.119877935Z" level=info msg="Created container 20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=8376393f-071c-412c-afa1-4dad9c79aec5 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.120283685Z" level=info msg="Starting container: 20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" id=6d0f5f53-f730-48ef-bbec-0d941b25fdf3 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.140937237Z" level=info msg="Started container" PID=72241 containerID=20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=6d0f5f53-f730-48ef-bbec-0d941b25fdf3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.146088112Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.156069458Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.156092023Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.156105320Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.166240228Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.166265133Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.166279253Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.175153672Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.175171863Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.175186907Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.183745018Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.183904965Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.183915427Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.191698318Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:22.191714933Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab conmon[72228]: conmon 20f6bd9bd07e4073a994 : container 72241 exited with status 1
Jan 23 16:47:22 hub-master-0.workload.bos2.lab systemd[1]: crio-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope has successfully entered the 'dead' state.
Jan 23 16:47:22 hub-master-0.workload.bos2.lab systemd[1]: crio-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope: Consumed 574ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope completed and consumed the indicated resources.
Jan 23 16:47:22 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope has successfully entered the 'dead' state.
Jan 23 16:47:22 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope: Consumed 49ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9.scope completed and consumed the indicated resources.
Jan 23 16:47:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:22.768357 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/184.log"
Jan 23 16:47:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:22.769937 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9}
Jan 23 16:47:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:22.770224 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:47:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:23.773949 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/185.log"
Jan 23 16:47:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:23.774636 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/184.log"
Jan 23 16:47:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:23.779002 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" exitCode=1
Jan 23 16:47:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:23.779038 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9}
Jan 23 16:47:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:23.779058 8631 scope.go:115] "RemoveContainer" containerID="32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335"
Jan 23 16:47:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:23.780071 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:47:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:23.780634 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:47:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:23.780930439Z" level=info msg="Removing container: 32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335" id=d3a6c2e7-b268-4908-be31-6597977b63f8 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:47:23 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-bc485f93c2f3c85d72b6af67b1b1005f1ffd6db1f3db1e33fa66ebbfedacb2cd-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-bc485f93c2f3c85d72b6af67b1b1005f1ffd6db1f3db1e33fa66ebbfedacb2cd-merged.mount has successfully entered the 'dead' state.
Jan 23 16:47:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:23.821395365Z" level=info msg="Removed container 32e4f1a74aa7c06d7dc22e9be8475398ca7cc176e409d74415330104939e9335: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=d3a6c2e7-b268-4908-be31-6597977b63f8 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.653221342Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.653265180Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.653233904Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.653339152Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.654162032Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.654210761Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.655850230Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.655882052Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9fa2a301\x2d9516\x2d419b\x2da8d3\x2d8e617ea464e0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9fa2a301\x2d9516\x2d419b\x2da8d3\x2d8e617ea464e0.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.658638719Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.658673112Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c228b4bb\x2d1741\x2d4cd1\x2d8f90\x2d5e8ae2493793.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c228b4bb\x2d1741\x2d4cd1\x2d8f90\x2d5e8ae2493793.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3ab32018\x2dbd49\x2d4cbf\x2d9a6d\x2d5dac5ca60ae4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3ab32018\x2dbd49\x2d4cbf\x2d9a6d\x2d5dac5ca60ae4.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d4fe6358\x2d4980\x2d464e\x2d8721\x2dd3298c16815a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d4fe6358\x2d4980\x2d464e\x2d8721\x2dd3298c16815a.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c4c7fa03\x2d40f8\x2d4862\x2d9405\x2db3289835586e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c4c7fa03\x2d40f8\x2d4862\x2d9405\x2db3289835586e.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c228b4bb\x2d1741\x2d4cd1\x2d8f90\x2d5e8ae2493793.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c228b4bb\x2d1741\x2d4cd1\x2d8f90\x2d5e8ae2493793.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3ab32018\x2dbd49\x2d4cbf\x2d9a6d\x2d5dac5ca60ae4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3ab32018\x2dbd49\x2d4cbf\x2d9a6d\x2d5dac5ca60ae4.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d4fe6358\x2d4980\x2d464e\x2d8721\x2dd3298c16815a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d4fe6358\x2d4980\x2d464e\x2d8721\x2dd3298c16815a.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9fa2a301\x2d9516\x2d419b\x2da8d3\x2d8e617ea464e0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9fa2a301\x2d9516\x2d419b\x2da8d3\x2d8e617ea464e0.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c4c7fa03\x2d40f8\x2d4862\x2d9405\x2db3289835586e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c4c7fa03\x2d40f8\x2d4862\x2d9405\x2db3289835586e.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c228b4bb\x2d1741\x2d4cd1\x2d8f90\x2d5e8ae2493793.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c228b4bb\x2d1741\x2d4cd1\x2d8f90\x2d5e8ae2493793.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3ab32018\x2dbd49\x2d4cbf\x2d9a6d\x2d5dac5ca60ae4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-3ab32018\x2dbd49\x2d4cbf\x2d9a6d\x2d5dac5ca60ae4.mount has successfully entered the 'dead' state.
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.692361359Z" level=info msg="runSandbox: deleting pod ID 7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6 from idIndex" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.692386576Z" level=info msg="runSandbox: removing pod sandbox 7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.692406735Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.692415258Z" level=info msg="runSandbox: deleting pod ID e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9 from idIndex" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.692443489Z" level=info msg="runSandbox: removing pod sandbox e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.692418955Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.692456127Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.692570718Z" level=info msg="runSandbox: unmounting shmPath for sandbox e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.701293526Z" level=info msg="runSandbox: deleting pod ID 627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60 from idIndex" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.701309384Z" level=info msg="runSandbox: deleting pod ID f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb from idIndex" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.701324544Z" level=info msg="runSandbox: removing pod sandbox 627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.701336269Z" level=info msg="runSandbox: removing pod sandbox f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.701343064Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.701350154Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.701358654Z" level=info msg="runSandbox: unmounting shmPath for sandbox 627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.701364137Z" level=info msg="runSandbox: unmounting shmPath for sandbox f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.708336350Z" level=info msg="runSandbox: deleting pod ID eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c from idIndex" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.708361310Z" level=info msg="runSandbox: removing pod sandbox eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.708374742Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.708387956Z" level=info msg="runSandbox: unmounting shmPath for sandbox eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.708490915Z" level=info msg="runSandbox: removing pod sandbox from storage: e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.711897869Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.711917250Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=06ed74a6-21b4-4805-8460-645990278934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.712122 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.712167 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.712190 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.712241 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.716438590Z" level=info msg="runSandbox: removing pod sandbox from storage: 7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.719937588Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.719954739Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=0f6cec81-3269-4452-930c-b857bb528fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.720122 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.720161 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.720183 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.720227 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.724442470Z" level=info msg="runSandbox: removing pod sandbox from storage: 627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.725448058Z" level=info msg="runSandbox: removing pod sandbox from storage: f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.727751837Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.727771698Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=a7a1f7bf-50fa-4135-809a-48f76fdcff90 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.727905 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.727937 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.727958 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.728010 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.728431257Z" level=info msg="runSandbox: removing pod sandbox from storage: eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.730996813Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.731014055Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=e9c561f7-1c3d-454a-89ba-2e238d6968dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.731262 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.731304 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.731325 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.731364 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.734385459Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.734402944Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=54a8a6e6-b67b-451e-a899-549b836214dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.734613 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.734648 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.734670 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.734714 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:24.782533 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/185.log" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:24.784089 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:24.784151 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784307170Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784335836Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784449439Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784477348Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:24.784360 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:24.784419 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:24.784548 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784606551Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784639620Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784665892Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784693525Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784735017Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.784752791Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:24.784847 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:47:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:24.785366 8631 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.803824934Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/1648f6df-7440-4ead-9369-22361948ec13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.803848045Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.803949134Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/092be225-6080-45ff-ac35-423f89bedb4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.803965813Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.814055732Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/40c068be-3ac7-464b-b68a-f94ac7e4ec7e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.814081817Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.817918906Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/31cd4f49-aed8-4aa1-a821-64dced626435 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.817940917Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.818982253Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e UID:69794e08-d62b-401c-8dea-a730bf37256a 
NetNS:/var/run/netns/e41b8a8a-60e6-4cf0-8f33-5c40ac617917 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:24.819005369Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d4fe6358\x2d4980\x2d464e\x2d8721\x2dd3298c16815a.mount: Succeeded. Jan 23 16:47:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9fa2a301\x2d9516\x2d419b\x2da8d3\x2d8e617ea464e0.mount: Succeeded. Jan 23 16:47:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c4c7fa03\x2d40f8\x2d4862\x2d9405\x2db3289835586e.mount: Succeeded. Jan 23 16:47:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f720116af05a5133b691f5346c0def730cf5a33018c9a7542043c548e83a53eb-userdata-shm.mount: Succeeded. Jan 23 16:47:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-627fe90a914a8aeff3a4b8ecc7018ecfe13de6290a626b2c474859bf40a2bf60-userdata-shm.mount: Succeeded. Jan 23 16:47:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-eb6742b4b83e7b84e304c22dd84f5deee4f72d67b54ac806d2d53f088a00ca5c-userdata-shm.mount: Succeeded. Jan 23 16:47:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7c429300d4e2c031b9383120e24650fae12dd0bc2b755bb48c607036ec6cdba6-userdata-shm.mount: Succeeded.
Jan 23 16:47:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e4796449580d7425b0bfe3bb33e921b2dda3a20900d564bccbf21075a5e03bb9-userdata-shm.mount: Succeeded. Jan 23 16:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:27.871943 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:27.872089 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:27.872096 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:27.872102 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:27.872110 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:27.872116 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:27.872124 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:47:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:28.142413080Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:47:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:36.996833 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:47:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:36.997375 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:47:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492458.1225] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:47:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492458.1230] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:47:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492458.1232] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:47:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492458.1512] dhcp4
(eno12409): canceled DHCP transaction Jan 23 16:47:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492458.1513] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.033455559Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.033702085Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-061b5ebf\x2d875e\x2d4f34\x2db78b\x2d028d6820b11c.mount: Succeeded. Jan 23 16:47:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-061b5ebf\x2d875e\x2d4f34\x2db78b\x2d028d6820b11c.mount: Succeeded. Jan 23 16:47:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-061b5ebf\x2d875e\x2d4f34\x2db78b\x2d028d6820b11c.mount: Succeeded.
Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.070286974Z" level=info msg="runSandbox: deleting pod ID 94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67 from idIndex" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.070312475Z" level=info msg="runSandbox: removing pod sandbox 94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.070325651Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.070337838Z" level=info msg="runSandbox: unmounting shmPath for sandbox 94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67-userdata-shm.mount: Succeeded. Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.086436607Z" level=info msg="runSandbox: removing pod sandbox from storage: 94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.089654190Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:43.089685069Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=ede1c77e-5760-4a06-9eba-0efc1864137d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:43.089846 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:47:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:43.090014 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:47:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:43.090042 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:47:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:43.090093 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(94061e5c403c54ec06be3fbaee6808107432b058934ce1185992b2b42a2d7d67): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:47:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:48.996570 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:47:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:48.997109 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.033702117Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.033741927Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-32aeb05f\x2d58bf\x2d4366\x2dbcfa\x2d16ffe16a4203.mount: Succeeded. Jan 23 16:47:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-32aeb05f\x2d58bf\x2d4366\x2dbcfa\x2d16ffe16a4203.mount: Succeeded. Jan 23 16:47:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-32aeb05f\x2d58bf\x2d4366\x2dbcfa\x2d16ffe16a4203.mount: Succeeded.
Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.077282556Z" level=info msg="runSandbox: deleting pod ID 24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97 from idIndex" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.077307704Z" level=info msg="runSandbox: removing pod sandbox 24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.077323492Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.077336271Z" level=info msg="runSandbox: unmounting shmPath for sandbox 24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97-userdata-shm.mount: Succeeded. Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.093456560Z" level=info msg="runSandbox: removing pod sandbox from storage: 24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.097076719Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:49.097094834Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=23ffca72-e947-40aa-9743-df5095a6ac90 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:49.097217 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 16:47:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:49.097253 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:47:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:49.097275 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:47:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:49.097313 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(24f3a253ca2c19a639fa903f3f8f46641f3358a00e069d0eaaae464dd2834c97): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.035904122Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.035938967Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.037799106Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.037840484Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4793e499\x2d77c3\x2d4262\x2d8218\x2ddb608ad7ab72.mount: Succeeded. Jan 23 16:47:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f06c51d1\x2ded7d\x2d4005\x2dbf6b\x2dc4cdc32dab27.mount: Succeeded. Jan 23 16:47:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4793e499\x2d77c3\x2d4262\x2d8218\x2ddb608ad7ab72.mount: Succeeded.
Jan 23 16:47:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f06c51d1\x2ded7d\x2d4005\x2dbf6b\x2dc4cdc32dab27.mount: Succeeded. Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.076292820Z" level=info msg="runSandbox: deleting pod ID e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d from idIndex" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.076318662Z" level=info msg="runSandbox: removing pod sandbox e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.076331473Z" level=info msg="runSandbox: deleting pod ID d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86 from idIndex" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.076363888Z" level=info msg="runSandbox: removing pod sandbox d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.076335077Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.076403269Z" level=info msg="runSandbox: unmounting shmPath for sandbox e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.076427548Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.076445283Z" level=info msg="runSandbox: unmounting shmPath for sandbox d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.093412881Z" level=info msg="runSandbox: removing pod sandbox from storage: d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.093467311Z" level=info msg="runSandbox: removing pod sandbox from storage:
e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.096906112Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.096927594Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=a3495b18-9903-455c-b1df-cf7d9e648451 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:51.097065 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:47:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:51.097111 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:47:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:51.097134 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:47:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:51.097187 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.100125373Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:51.100145939Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=e81c2e7a-b0a1-4341-aac3-e4f80a6e3ad5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:51.100355 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:47:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:51.100396 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:47:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:51.100418 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:47:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:51.100464 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:47:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4793e499\x2d77c3\x2d4262\x2d8218\x2ddb608ad7ab72.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4793e499\x2d77c3\x2d4262\x2d8218\x2ddb608ad7ab72.mount has successfully entered the 'dead' state. Jan 23 16:47:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f06c51d1\x2ded7d\x2d4005\x2dbf6b\x2dc4cdc32dab27.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f06c51d1\x2ded7d\x2d4005\x2dbf6b\x2dc4cdc32dab27.mount has successfully entered the 'dead' state. Jan 23 16:47:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e0586b82f072af27bac8b61cb30c55a2629e5f02bc2cd67454e0caf729e8993d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:47:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d2b70f734d7e29b13113820aaee167e8212d2a038c5a5f7f7e49664022f35c86-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.051764943Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.052047766Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.051787345Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.052161919Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.052473274Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.052509870Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.052868819Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.052902481Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6c598135\x2d6b02\x2d49a0\x2d9e8c\x2dc84d9d00a86f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6c598135\x2d6b02\x2d49a0\x2d9e8c\x2dc84d9d00a86f.mount has successfully entered the 'dead' state. Jan 23 16:47:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5e5dcf4c\x2da157\x2d46d5\x2d87b2\x2d4ce66ba007aa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5e5dcf4c\x2da157\x2d46d5\x2d87b2\x2d4ce66ba007aa.mount has successfully entered the 'dead' state. Jan 23 16:47:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1d4712b6\x2d7f67\x2d4695\x2d8b6b\x2dc67aeaa9a3ab.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1d4712b6\x2d7f67\x2d4695\x2d8b6b\x2dc67aeaa9a3ab.mount has successfully entered the 'dead' state. Jan 23 16:47:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-679f7075\x2d92dc\x2d4856\x2d9610\x2d4237cfb6f8f2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-679f7075\x2d92dc\x2d4856\x2d9610\x2d4237cfb6f8f2.mount has successfully entered the 'dead' state. Jan 23 16:47:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5e5dcf4c\x2da157\x2d46d5\x2d87b2\x2d4ce66ba007aa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5e5dcf4c\x2da157\x2d46d5\x2d87b2\x2d4ce66ba007aa.mount has successfully entered the 'dead' state. Jan 23 16:47:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1d4712b6\x2d7f67\x2d4695\x2d8b6b\x2dc67aeaa9a3ab.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1d4712b6\x2d7f67\x2d4695\x2d8b6b\x2dc67aeaa9a3ab.mount has successfully entered the 'dead' state. Jan 23 16:47:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-679f7075\x2d92dc\x2d4856\x2d9610\x2d4237cfb6f8f2.mount: Succeeded. 
Jan 23 16:47:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6c598135\x2d6b02\x2d49a0\x2d9e8c\x2dc84d9d00a86f.mount: Succeeded. Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.102307713Z" level=info msg="runSandbox: deleting pod ID 8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a from idIndex" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.102336831Z" level=info msg="runSandbox: removing pod sandbox 8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.102351836Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.102365874Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.103306989Z" level=info msg="runSandbox: deleting pod ID f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371 from idIndex" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.103331083Z" level=info msg="runSandbox: removing pod sandbox f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.103344158Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.103357429Z" level=info msg="runSandbox: unmounting shmPath for sandbox f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.106307665Z" level=info msg="runSandbox: deleting pod ID b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b from idIndex" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.106340124Z" level=info msg="runSandbox: removing pod sandbox
b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.106355698Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.106370102Z" level=info msg="runSandbox: unmounting shmPath for sandbox b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.106413765Z" level=info msg="runSandbox: deleting pod ID 2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec from idIndex" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.106440881Z" level=info msg="runSandbox: removing pod sandbox 2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.106453793Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.106474143Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.118511672Z" level=info msg="runSandbox: removing pod sandbox from storage: 8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.118524361Z" level=info msg="runSandbox: removing pod sandbox from storage: f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.118537535Z" level=info msg="runSandbox: removing pod sandbox from storage: 2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.126191837Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.126221935Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" 
id=beeda064-f9a4-4cc9-bf4a-620c50b0f012 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.126429098Z" level=info msg="runSandbox: removing pod sandbox from storage: b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.126521 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.126568 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.126594 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.126639 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.129415174Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.129432264Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b97cbc4d-d015-4efa-bc0d-7279ede9cd05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.129629 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.129671 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.129694 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.129736 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.132351480Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.132370015Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=ae145397-19b2-4ac1-9420-83bdef8e2666 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.132608 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.132641 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.132662 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.132703 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.135432989Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.135456444Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=35ef69e6-02a8-4ad5-b953-dd1d2c4ad44e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.135708 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.135745 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.135768 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:56.135807 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:47:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:47:56.995474 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.995800189Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:56.995837365Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.006819592Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/08cf681a-21ed-4263-ab59-93c7b8d8c9fd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.006840377Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.037328482Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.037363258Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.037502483Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.037533508Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox 7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5c2aa717\x2de61a\x2d4b06\x2d84ad\x2db4f692ad1bcc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5c2aa717\x2de61a\x2d4b06\x2d84ad\x2db4f692ad1bcc.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5c2aa717\x2de61a\x2d4b06\x2d84ad\x2db4f692ad1bcc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5c2aa717\x2de61a\x2d4b06\x2d84ad\x2db4f692ad1bcc.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f58995f4\x2db864\x2d488b\x2db595\x2d1a4b38be925f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f58995f4\x2db864\x2d488b\x2db595\x2d1a4b38be925f.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f58995f4\x2db864\x2d488b\x2db595\x2d1a4b38be925f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f58995f4\x2db864\x2d488b\x2db595\x2d1a4b38be925f.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6c598135\x2d6b02\x2d49a0\x2d9e8c\x2dc84d9d00a86f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6c598135\x2d6b02\x2d49a0\x2d9e8c\x2dc84d9d00a86f.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-679f7075\x2d92dc\x2d4856\x2d9610\x2d4237cfb6f8f2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-679f7075\x2d92dc\x2d4856\x2d9610\x2d4237cfb6f8f2.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5e5dcf4c\x2da157\x2d46d5\x2d87b2\x2d4ce66ba007aa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5e5dcf4c\x2da157\x2d46d5\x2d87b2\x2d4ce66ba007aa.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1d4712b6\x2d7f67\x2d4695\x2d8b6b\x2dc67aeaa9a3ab.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1d4712b6\x2d7f67\x2d4695\x2d8b6b\x2dc67aeaa9a3ab.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b2442830a987813c551f47c10306b8eb8e732a8d6ad083e1264bf7dba9c7569b-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2a4be559cd5c8fa793ccc4f2b3360dfa80262c37e998a642b25426410b6ca9ec-userdata-shm.mount: Succeeded. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f4379c38086920e928fad6a019bac25dcbf05ac20712299296b46c85e7ab7371-userdata-shm.mount: Succeeded. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8b06f7f586897652578903fe2d5ebd4a8ddf53c81acb1ab1b1b2c0040293d72a-userdata-shm.mount: Succeeded. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5c2aa717\x2de61a\x2d4b06\x2d84ad\x2db4f692ad1bcc.mount: Succeeded. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f58995f4\x2db864\x2d488b\x2db595\x2d1a4b38be925f.mount: Succeeded.
Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.079329107Z" level=info msg="runSandbox: deleting pod ID af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095 from idIndex" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.079358477Z" level=info msg="runSandbox: removing pod sandbox af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.079374384Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.079389221Z" level=info msg="runSandbox: unmounting shmPath for sandbox af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.079334977Z" level=info msg="runSandbox: deleting pod ID 7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035 from idIndex" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.079449090Z" level=info msg="runSandbox: removing pod sandbox 7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.079465120Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.079480007Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.091438467Z" level=info msg="runSandbox: removing pod sandbox from storage: 7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.091474261Z" level=info msg="runSandbox: removing pod sandbox from storage: af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.094090033Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.094109088Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=c149cc19-26dd-4b0c-b9bf-d5b3270b2b7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:57.094429 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:57.094479 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:57.094500 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:57.094550 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(7752bc28acbf3276d485ad20978681034752ae7a13f90b615ba503d6e5128035): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.097632471Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:57.097652256Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=436c017a-df2b-46e1-b893-3d58de5480f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:57.097841 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:57.097886 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:57.097911 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:47:57.097961 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(af7758419937b3254082b2d8f78d80629f052156d6ced5154427f0593d40d095): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:47:58.143525308Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:48:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:00.996758 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:48:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:00.997402 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.032379684Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.032421904Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fb7a279b\x2ddf91\x2d43c4\x2da208\x2d9733990ba418.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-fb7a279b\x2ddf91\x2d43c4\x2da208\x2d9733990ba418.mount has successfully entered the 'dead' state.
Jan 23 16:48:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fb7a279b\x2ddf91\x2d43c4\x2da208\x2d9733990ba418.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-fb7a279b\x2ddf91\x2d43c4\x2da208\x2d9733990ba418.mount has successfully entered the 'dead' state.
Jan 23 16:48:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fb7a279b\x2ddf91\x2d43c4\x2da208\x2d9733990ba418.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-fb7a279b\x2ddf91\x2d43c4\x2da208\x2d9733990ba418.mount has successfully entered the 'dead' state.
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.082353467Z" level=info msg="runSandbox: deleting pod ID a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181 from idIndex" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.082389933Z" level=info msg="runSandbox: removing pod sandbox a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.082406980Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.082421506Z" level=info msg="runSandbox: unmounting shmPath for sandbox a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.102457935Z" level=info msg="runSandbox: removing pod sandbox from storage: a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.105929737Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:01.105948890Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d420cdbb-2fdf-49d4-8e41-69407aa013e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:01.106165 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:48:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:01.106211 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:48:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:01.106236 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:48:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:01.106283 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:48:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a93dfe8c546c7247c5b33c540b27b7c76b5d2a8c766c0e4f535787f3fca05181-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:48:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:02.995977 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:48:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:02.996342653Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:02.996382200Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:03.007881741Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/d18c4304-5ea5-4aa9-908b-1d846439594c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:03.007905774Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.036363196Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.036402600Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bb98ec12\x2d6cbe\x2d43a0\x2db2f0\x2d16106fc13736.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-bb98ec12\x2d6cbe\x2d43a0\x2db2f0\x2d16106fc13736.mount has successfully entered the 'dead' state.
Jan 23 16:48:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bb98ec12\x2d6cbe\x2d43a0\x2db2f0\x2d16106fc13736.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-bb98ec12\x2d6cbe\x2d43a0\x2db2f0\x2d16106fc13736.mount has successfully entered the 'dead' state.
Jan 23 16:48:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bb98ec12\x2d6cbe\x2d43a0\x2db2f0\x2d16106fc13736.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-bb98ec12\x2d6cbe\x2d43a0\x2db2f0\x2d16106fc13736.mount has successfully entered the 'dead' state.
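[Editor's note] The "still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" failures above come from a poll loop: Multus refuses to wire pods into the default network until OVN-Kubernetes has written its CNI config file, and gives up after a timeout. Below is a minimal sketch (not Multus source; the 1s interval and 60s timeout are assumptions for illustration) of that kind of readiness gate in Go; note that wait.ErrWaitTimeout renders as exactly the "timed out waiting for the condition" string propagated into the kubelet errors.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the readiness indicator file exists
// or the timeout elapses. A missing file means the default network's CNI
// plugin (here, ovn-kubernetes) has not come up yet.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err == nil {
			return true, nil // file exists: default network is ready
		}
		return false, nil // keep polling until the deadline
	})
}

func main() {
	err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 60*time.Second)
	if err != nil {
		// Prints: pollimmediate error: timed out waiting for the condition
		fmt.Println("pollimmediate error:", err)
	}
}
```

In this log the file never appears because ovnkube-node itself is crash-looping (see the CrashLoopBackOff entries), so every sandbox creation on the node times out at this gate.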
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.081279379Z" level=info msg="runSandbox: deleting pod ID baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873 from idIndex" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.081305612Z" level=info msg="runSandbox: removing pod sandbox baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.081321737Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.081336455Z" level=info msg="runSandbox: unmounting shmPath for sandbox baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.102417037Z" level=info msg="runSandbox: removing pod sandbox from storage: baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.105282404Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.105308986Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=4524d763-0a70-420f-8ce8-4f3cab24f524 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:05.105478 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:48:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:05.105535 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:48:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:05.105558 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:48:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:05.105607 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(baa56ea0fc93cec9eb8c79278cc8a2b06ecdf21cc03ed73140c0536cdc26b873): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:48:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:05.995487 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:48:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:05.995630 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.995821323Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.996004904Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.996041118Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:05.996076899Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:06.015247953Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/eedcebbb-2d29-41a8-b7e2-6bbb77106c38 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:06.015278540Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:06.015983651Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/76faa239-de23-4161-8354-59cc9d01808d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:06.016008248Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:07.996639 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:07.996995066Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:07.997036153Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:08.008020767Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/10a156cd-0562-40e8-9574-3db200f86358 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:08.008043980Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:08.995942 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:48:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:08.996219 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:08.996272685Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:08.996321591Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:08.996565652Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:08.996600084Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.012575562Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/1bf44177-7e4f-40d8-b649-8f0faef79344 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.012596078Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.012661120Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/2226d58a-275d-4525-a1b9-cb092ec2eb98 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.012687154Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818471260Z" level=info msg="NetworkStart: stopping network for sandbox 84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818472031Z" level=info msg="NetworkStart: stopping network for sandbox f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818614793Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/092be225-6080-45ff-ac35-423f89bedb4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818638876Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818645808Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818652257Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818663685Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/1648f6df-7440-4ead-9369-22361948ec13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818690989Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818700444Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.818708470Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.825933174Z" level=info msg="NetworkStart: stopping network for sandbox 61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.826053172Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/40c068be-3ac7-464b-b68a-f94ac7e4ec7e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.826072780Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.826079227Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.826085806Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830359347Z" level=info msg="NetworkStart: stopping network for sandbox 813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830435940Z" level=info msg="NetworkStart: stopping network for sandbox 144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830468591Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/e41b8a8a-60e6-4cf0-8f33-5c40ac617917 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830492638Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830499609Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830506210Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830554730Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/31cd4f49-aed8-4aa1-a821-64dced626435 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830580328Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830587525Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.830593911Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:09.996095 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:09.996210 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.996472817Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.996507512Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.996586662Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:09.996614517Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:10.010874582Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/b6e739a9-9e0d-499e-b855-d8ec83f8fe60 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:10.010893201Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:10.012236814Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/bb93cda0-3605-4535-b9be-2f96abd51e86 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:10.012254479Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:10.996423 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:10.996692931Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:10.996739562Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:11.007468200Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/cebff773-fd95-4770-825c-7053650adb15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:11.007491025Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:11.997350 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:48:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:11.997894 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:48:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:13.995861 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:48:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:13.996203212Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:13.996269062Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:14.008268231Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/ac49afe0-f35e-43cf-bf08-8431591ed91c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:14.008294029Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:15.996188 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:48:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:15.996638555Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:15.996692202Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:16.007932159Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/798410dd-e837-4d3e-bdf4-1f8add272cbc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:16.007952120Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:23.996863 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:48:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:23.997422 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
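[Editor's note] The recurring "back-off 5m0s restarting failed container=ovnkube-node" entries mean the node's OVN CNI daemon is stuck at kubelet's maximum crash-loop delay, which is why the readiness indicator file never appears. As I understand kubelet's defaults, the restart delay starts at 10s and doubles per crash up to a 5m cap; the sketch below is illustrative arithmetic only, not kubelet source.

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay approximates kubelet's container restart back-off:
// assumed defaults of a 10s initial delay, doubling per restart, capped at 5m.
func crashLoopDelay(restarts int) time.Duration {
	delay := 10 * time.Second
	for i := 0; i < restarts; i++ {
		delay *= 2
		if delay >= 5*time.Minute {
			return 5 * time.Minute // reported as "back-off 5m0s"
		}
	}
	return delay
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r))
	}
	// After roughly five quick crashes the cap is reached, so kubelet only
	// retries ovnkube-node every five minutes, matching the log cadence here.
}
```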
kubenswrapper[8631]: I0123 16:48:27.872566 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:27.872585 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:27.872592 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:27.872598 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:27.872605 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:27.872611 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:27.872619 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:48:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:28.142825870Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:48:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:36.996993 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:48:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:36.997543 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:48:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:42.020996956Z" level=info msg="NetworkStart: stopping network for sandbox 0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:42.021140356Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/08cf681a-21ed-4263-ab59-93c7b8d8c9fd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:42.021164907Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:42.021172102Z" level=warning msg="falling back to loading 
from existing plugins on disk" Jan 23 16:48:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:42.021179388Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:48.021183611Z" level=info msg="NetworkStart: stopping network for sandbox 4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:48.021346948Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/d18c4304-5ea5-4aa9-908b-1d846439594c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:48.021371705Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:48.021379399Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:48:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:48.021386212Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:49.997111 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:48:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:49.997785 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.027041285Z" level=info msg="NetworkStart: stopping network for sandbox 4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.027239963Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/eedcebbb-2d29-41a8-b7e2-6bbb77106c38 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.027277255Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.027287908Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:48:51 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:48:51.027298540Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.029438391Z" level=info msg="NetworkStart: stopping network for sandbox 6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.029618353Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/76faa239-de23-4161-8354-59cc9d01808d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.029649668Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.029657515Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:51.029664459Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:53.021775246Z" level=info msg="NetworkStart: stopping network for sandbox b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:53.021915053Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/10a156cd-0562-40e8-9574-3db200f86358 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:53.021938305Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:53.021946455Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:53.021952030Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026015929Z" level=info msg="NetworkStart: stopping network for sandbox 1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026160049Z" level=info msg="Got pod network 
&{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/2226d58a-275d-4525-a1b9-cb092ec2eb98 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026185112Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026191827Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026197759Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026299087Z" level=info msg="NetworkStart: stopping network for sandbox 0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026444757Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/1bf44177-7e4f-40d8-b649-8f0faef79344 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026468025Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026478402Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.026484661Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.829733725Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.829772417Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.829784187Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.829817767Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-092be225\x2d6080\x2d45ff\x2dac35\x2d423f89bedb4b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-092be225\x2d6080\x2d45ff\x2dac35\x2d423f89bedb4b.mount has successfully entered the 'dead' state. Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1648f6df\x2d7440\x2d4ead\x2d9369\x2d22361948ec13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1648f6df\x2d7440\x2d4ead\x2d9369\x2d22361948ec13.mount has successfully entered the 'dead' state. Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.835805239Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.835839375Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-40c068be\x2d3ac7\x2d464b\x2db68a\x2df94ac7e4ec7e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-40c068be\x2d3ac7\x2d464b\x2db68a\x2df94ac7e4ec7e.mount has successfully entered the 'dead' state. 
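[Every "PollImmediate error waiting for ReadinessIndicatorFile" failure above comes from one gate: before serving a CNI ADD or DEL, Multus polls for the default network's config file and gives up after a timeout, producing the "timed out waiting for the condition" string that CRI-O and kubelet then propagate. A minimal Go sketch of that gate follows; it is an illustration rather than Multus's actual source, the file path is taken from the log, and the 1s interval / 10s timeout are assumed values, not Multus's real defaults.

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until the default network's CNI config
    // file exists, mirroring the check that is timing out in the log above.
    func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
        return wait.PollImmediate(interval, timeout, func() (bool, error) {
            _, err := os.Stat(path)
            return err == nil, nil // keep polling while the file is absent
        })
    }

    func main() {
        err := waitForReadinessIndicator(
            "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 10*time.Second)
        if err != nil {
            // wait.ErrWaitTimeout prints as "timed out waiting for the
            // condition" -- the exact text seen in the entries above.
            fmt.Println("PollImmediate error waiting for ReadinessIndicatorFile:", err)
        }
    }

Until 10-ovn-kubernetes.conf appears on disk, every sandbox setup and teardown that needs the default network keeps failing this way, which is why the same error repeats for pod after pod below.]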
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.840045580Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.840073286Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.841974257Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.842016409Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e41b8a8a\x2d60e6\x2d4cf0\x2d8f33\x2d5c40ac617917.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e41b8a8a\x2d60e6\x2d4cf0\x2d8f33\x2d5c40ac617917.mount has successfully entered the 'dead' state.
Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-31cd4f49\x2daed8\x2d4aa1\x2da821\x2d64dced626435.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-31cd4f49\x2daed8\x2d4aa1\x2da821\x2d64dced626435.mount has successfully entered the 'dead' state.
Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-40c068be\x2d3ac7\x2d464b\x2db68a\x2df94ac7e4ec7e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-40c068be\x2d3ac7\x2d464b\x2db68a\x2df94ac7e4ec7e.mount has successfully entered the 'dead' state.
Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-092be225\x2d6080\x2d45ff\x2dac35\x2d423f89bedb4b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-092be225\x2d6080\x2d45ff\x2dac35\x2d423f89bedb4b.mount has successfully entered the 'dead' state.
Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1648f6df\x2d7440\x2d4ead\x2d9369\x2d22361948ec13.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-1648f6df\x2d7440\x2d4ead\x2d9369\x2d22361948ec13.mount has successfully entered the 'dead' state.
Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e41b8a8a\x2d60e6\x2d4cf0\x2d8f33\x2d5c40ac617917.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e41b8a8a\x2d60e6\x2d4cf0\x2d8f33\x2d5c40ac617917.mount has successfully entered the 'dead' state.
Jan 23 16:48:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-31cd4f49\x2daed8\x2d4aa1\x2da821\x2d64dced626435.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-31cd4f49\x2daed8\x2d4aa1\x2da821\x2d64dced626435.mount has successfully entered the 'dead' state.
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875418547Z" level=info msg="runSandbox: deleting pod ID 61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae from idIndex" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875423405Z" level=info msg="runSandbox: deleting pod ID 84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb from idIndex" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875472126Z" level=info msg="runSandbox: removing pod sandbox 84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875485847Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875498005Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875423413Z" level=info msg="runSandbox: deleting pod ID f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7 from idIndex" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875585420Z" level=info msg="runSandbox: removing pod sandbox f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875599906Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875610810Z" level=info msg="runSandbox: unmounting shmPath for sandbox f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875617878Z" level=info msg="runSandbox: removing pod sandbox 61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875647757Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.875662094Z" level=info msg="runSandbox: unmounting shmPath for sandbox 61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.879286107Z" level=info msg="runSandbox: deleting pod ID 813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e from idIndex" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.879309598Z" level=info msg="runSandbox: removing pod sandbox 813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.879325909Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.879336965Z" level=info msg="runSandbox: unmounting shmPath for sandbox 813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.879312345Z" level=info msg="runSandbox: deleting pod ID 144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4 from idIndex" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.879409481Z" level=info msg="runSandbox: removing pod sandbox 144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.879424001Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.879436462Z" level=info msg="runSandbox: unmounting shmPath for sandbox 144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.887440587Z" level=info msg="runSandbox: removing pod sandbox from storage: 84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.887463260Z" level=info msg="runSandbox: removing pod sandbox from storage: 61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.888432139Z" level=info msg="runSandbox: removing pod sandbox from storage: f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.892552782Z" level=info msg="runSandbox: removing pod sandbox from storage: 144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.892640252Z" level=info msg="runSandbox: removing pod sandbox from storage: 813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.893386848Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.893419326Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=25312ff9-14d5-440b-83cd-63f06b4d0b30 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.893742 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.893798 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.893826 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.893885 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.897804127Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.897823596Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=3730e50b-00eb-4807-bf08-a596eab9c7b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.898096 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.898143 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.898172 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.898230 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.900754628Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.900770298Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=a70ebebf-baa9-4e97-9982-9d8c37987733 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.900979 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.901014 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.901035 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.901074 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.903751865Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.903768102Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=6b29b59a-3da7-46a9-8141-5166889577bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.903991 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.904028 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.904053 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.904102 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.906680790Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.906698001Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=8a639551-d58d-4b48-9fef-a1a0922d2886 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.906804 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.906840 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.906865 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:48:54.906913 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:54.960776 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:54.960888 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:54.961065 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961132771Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961164374Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:54.961150 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961256412Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961285128Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961398386Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961424126Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961498165Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961513715Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961538501Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:48:54.961312 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.961564379Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.987232002Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/db334a8a-a61c-4629-a460-ff32288d59c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.987252850Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.987816582Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3bbc2260-f80e-4597-a707-07e8746602fe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.987838104Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.989506366Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/2e95d957-6889-4699-8259-1a00c0542003 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.989526509Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.991121491Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/59c050d3-8769-422f-9c42-96a2d5e8769e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.991142891Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.991847581Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b 
NetNS:/var/run/netns/daae6c89-051d-4fff-91d1-4b9ed6e55c16 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:54.991865334Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.022887893Z" level=info msg="NetworkStart: stopping network for sandbox e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.023025309Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/b6e739a9-9e0d-499e-b855-d8ec83f8fe60 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.023049310Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.023056174Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.023061870Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.024602882Z" level=info msg="NetworkStart: stopping network for sandbox a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.024709102Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/bb93cda0-3605-4535-b9be-2f96abd51e86 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.024731069Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.024737350Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:55.024743526Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e41b8a8a\x2d60e6\x2d4cf0\x2d8f33\x2d5c40ac617917.mount: Succeeded. 
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-31cd4f49\x2daed8\x2d4aa1\x2da821\x2d64dced626435.mount: Succeeded.
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-813be68542c7864941562b7e9c7a9b8aee8bc37e7871831c665f752c1c62105e-userdata-shm.mount: Succeeded.
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-40c068be\x2d3ac7\x2d464b\x2db68a\x2df94ac7e4ec7e.mount: Succeeded.
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-144bdee15e3b9810f5a1eae5925adbfb36e836db4ae0b7a52ad8543750567cf4-userdata-shm.mount: Succeeded.
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-092be225\x2d6080\x2d45ff\x2dac35\x2d423f89bedb4b.mount: Succeeded.
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1648f6df\x2d7440\x2d4ead\x2d9369\x2d22361948ec13.mount: Succeeded.
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-84667551d47822aab5eafe773d3bbad57a279600f0b3ac7c6454285ef3f484eb-userdata-shm.mount: Succeeded.
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-61c71ff9cb8096ae1b3fc562b6da3f0b46f3ecfc09381cc7880974781e4f62ae-userdata-shm.mount: Succeeded.
Jan 23 16:48:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f67cc698e30871f9e24ea1de8db8aa35153bbaa8f8436b6679fc0504beba7fb7-userdata-shm.mount: Succeeded.
Jan 23 16:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:56.020052427Z" level=info msg="NetworkStart: stopping network for sandbox 1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:56.020353681Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/cebff773-fd95-4770-825c-7053650adb15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:56.020377516Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:56.020383890Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:56.020391434Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:48:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:58.146499163Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:48:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:59.021020510Z" level=info msg="NetworkStart: stopping network for sandbox 2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:48:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:59.021162082Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/ac49afe0-f35e-43cf-bf08-8431591ed91c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:48:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:59.021184178Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:48:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:59.021191559Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:48:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:48:59.021197198Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:01.020469684Z" level=info msg="NetworkStart: stopping network for sandbox a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:01.020627537Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/798410dd-e837-4d3e-bdf4-1f8add272cbc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:01.020655867Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:01.020664026Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:01.020670341Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:49:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:02.996186 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:49:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:02.996931 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:49:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:17.997507 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:49:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:17.998025 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.031912937Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
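Two correlated signatures recur from here to the end of the section: the kubelet backs off restarting the crashed ovnkube-node container (CrashLoopBackOff), and every Multus CNI add/delete times out waiting for the OVN-Kubernetes readiness indicator file, so no pod sandbox on this node can get networking until ovnkube-node recovers. Below is a minimal Python sketch for tallying both signatures from an exported journal; the input filename node.log is hypothetical (e.g. a prior "journalctl > node.log" export), not something named in this log.

import re
from collections import Counter

# Signatures taken verbatim from the records above.
CRASHLOOP = re.compile(r'back-off \S+ restarting failed container=(\S+) pod=(\S+?)_')
READINESS = re.compile(r'readinessindicatorfile @ (\S+)')

crashloops, readiness_waits = Counter(), Counter()
with open("node.log", encoding="utf-8", errors="replace") as fh:  # hypothetical export
    for line in fh:
        m = CRASHLOOP.search(line)
        if m:
            crashloops[(m.group(1), m.group(2))] += 1
        m = READINESS.search(line)
        if m:
            readiness_waits[m.group(1).rstrip('.')] += 1  # drop the sentence period

for (container, pod), n in crashloops.most_common():
    print(f"{n:5d}  CrashLoopBackOff  container={container}  pod={pod}")
for path, n in readiness_waits.most_common():
    print(f"{n:5d}  waiting for readiness indicator file {path}")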
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.031954557Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-08cf681a\x2d21ed\x2d4263\x2dab59\x2d93c7b8d8c9fd.mount: Succeeded.
Jan 23 16:49:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-08cf681a\x2d21ed\x2d4263\x2dab59\x2d93c7b8d8c9fd.mount: Succeeded.
Jan 23 16:49:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-08cf681a\x2d21ed\x2d4263\x2dab59\x2d93c7b8d8c9fd.mount: Succeeded.
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.081309578Z" level=info msg="runSandbox: deleting pod ID 0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54 from idIndex" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.081335317Z" level=info msg="runSandbox: removing pod sandbox 0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.081349781Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.081362131Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54-userdata-shm.mount: Succeeded.
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.097436981Z" level=info msg="runSandbox: removing pod sandbox from storage: 0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.100735457Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:27.100756749Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=1c69d4ec-f330-4987-84b6-6c957636a5d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:27.100992 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:27.101040 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:27.101065 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:27.101122 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0554841e15d3a5aa02dad553ebc404077ba698bd24b50fab14c8d84bf3e04a54): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
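The same sandbox failure surfaces four times per attempt as it propagates up the kubelet (remote_runtime.go:222, kuberuntime_sandbox.go:71, kuberuntime_manager.go:772, pod_workers.go:965), so raw error counts overstate the number of failed attempts. A sketch that collapses the repeats by the 64-hex sandbox ID embedded in the CRI name k8s_<pod>_<namespace>_<uid>_<attempt>(<sandbox-id>); node.log is the same hypothetical journal export as above.

import re

# One entry per distinct sandbox creation attempt, keyed by sandbox ID.
ATTEMPT = re.compile(
    r'failed to create pod network sandbox '
    r'k8s_(\S+?)_(\S+?)_[0-9a-f-]+_\d+\(([0-9a-f]{64})\)')

attempts = {}  # sandbox ID -> (namespace, pod); dicts keep insertion order
with open("node.log", encoding="utf-8", errors="replace") as fh:  # hypothetical export
    for line in fh:
        m = ATTEMPT.search(line)
        if m:
            attempts.setdefault(m.group(3), (m.group(2), m.group(1)))

for sandbox, (namespace, pod) in attempts.items():
    print(f"{sandbox[:12]}  {namespace}/{pod}")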
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:27.872886 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:27.872907 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:27.872915 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:27.872923 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:27.872930 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:27.872936 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:27.872945 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:49:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:28.146578146Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:49:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:28.996873 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:49:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:28.997382 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.034286619Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.034327048Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d18c4304\x2d5ea5\x2d4aa9\x2d908b\x2d1d846439594c.mount: Succeeded.
Jan 23 16:49:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d18c4304\x2d5ea5\x2d4aa9\x2d908b\x2d1d846439594c.mount: Succeeded.
Jan 23 16:49:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d18c4304\x2d5ea5\x2d4aa9\x2d908b\x2d1d846439594c.mount: Succeeded.
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.088305194Z" level=info msg="runSandbox: deleting pod ID 4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d from idIndex" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.088346190Z" level=info msg="runSandbox: removing pod sandbox 4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.088362847Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.088376104Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d-userdata-shm.mount: Succeeded.
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.113419792Z" level=info msg="runSandbox: removing pod sandbox from storage: 4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.116874949Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:33.116892903Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=ddb7c89d-6739-4d13-93eb-2921fc175cd2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:33.117146 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:49:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:33.117192 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:49:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:33.117217 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:49:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:33.117261 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(4b528941a22b64319f97b9b0906768be39ed5f519ee275f0476c6e0d68730b8d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.037774754Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.037836398Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.040928786Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.040968594Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-eedcebbb\x2d2d29\x2d41a8\x2db7e2\x2d6bbb77106c38.mount: Succeeded.
Jan 23 16:49:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-76faa239\x2dde23\x2d4161\x2d8354\x2d59cc9d01808d.mount: Succeeded.
Jan 23 16:49:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-eedcebbb\x2d2d29\x2d41a8\x2db7e2\x2d6bbb77106c38.mount: Succeeded.
Jan 23 16:49:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-76faa239\x2dde23\x2d4161\x2d8354\x2d59cc9d01808d.mount: Succeeded.
Jan 23 16:49:36 hub-master-0.workload.bos2.lab systemd[1]: run-netns-eedcebbb\x2d2d29\x2d41a8\x2db7e2\x2d6bbb77106c38.mount: Succeeded.
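In these mount unit names, systemd escapes the '-' characters inside the namespace UUIDs as \x2d. Decoding the escapes recovers the network-namespace IDs so they can be cross-referenced against the NetNS paths in the crio records above. A small sketch of the decoding (from a shell, systemd-escape --unescape does the same):

import re

def unescape_unit(name: str) -> str:
    """Decode systemd \\xNN escapes in a unit name (e.g. \\x2d -> '-')."""
    return re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit(r'run-netns-76faa239\x2dde23\x2d4161\x2d8354\x2d59cc9d01808d.mount'))
# -> run-netns-76faa239-de23-4161-8354-59cc9d01808d.mount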
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.080309250Z" level=info msg="runSandbox: deleting pod ID 4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74 from idIndex" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.080337628Z" level=info msg="runSandbox: removing pod sandbox 4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.080361349Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.080387154Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.088301077Z" level=info msg="runSandbox: deleting pod ID 6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d from idIndex" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.088330694Z" level=info msg="runSandbox: removing pod sandbox 6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.088345145Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.088359818Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.100440924Z" level=info msg="runSandbox: removing pod sandbox from storage: 4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.104158525Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.104180900Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1bcbd19d-56b0-4c79-aff8-55f74e801825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:36.104442 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:49:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:36.104483 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:49:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:36.104504 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:49:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:36.104554 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.108419476Z" level=info msg="runSandbox: removing pod sandbox from storage: 6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.111740830Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:36.111760377Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ffa2b6c2-f996-477e-adc0-95b0e102df15 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:36.111947 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:49:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:36.111991 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:49:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:36.112015 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:49:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:36.112066 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
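"PollImmediate error waiting for ReadinessIndicatorFile" and "still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf" describe a bounded poll: Multus repeatedly checks for the indicator file that OVN-Kubernetes writes once its node components are up, and gives up with "timed out waiting for the condition" when the deadline passes. A minimal illustration of that wait-with-deadline pattern follows; it is not Multus's actual implementation, and the 60-second timeout is illustrative only.

import os
import time

def wait_for_file(path: str, timeout: float, interval: float = 1.0) -> bool:
    """Poll until `path` exists or `timeout` seconds elapse; check immediately first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# The indicator file named in the errors above; timeout value is illustrative.
if not wait_for_file("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", timeout=60):
    raise TimeoutError("timed out waiting for the condition")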
Jan 23 16:49:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-76faa239\x2dde23\x2d4161\x2d8354\x2d59cc9d01808d.mount: Succeeded.
Jan 23 16:49:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6849e89fc8ddb53d53a77c5d0540532cbb2de2e429d1d4269806492e6648521d-userdata-shm.mount: Succeeded.
Jan 23 16:49:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4173d9867593ec52bc5bbde5c7b61c6024fb42678998b815edd6cce76c826d74-userdata-shm.mount: Succeeded.
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.033017767Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.033054450Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-10a156cd\x2d0562\x2d40e8\x2d9574\x2d3db200f86358.mount: Succeeded.
Jan 23 16:49:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-10a156cd\x2d0562\x2d40e8\x2d9574\x2d3db200f86358.mount: Succeeded.
Jan 23 16:49:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-10a156cd\x2d0562\x2d40e8\x2d9574\x2d3db200f86358.mount: Succeeded.
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.077303203Z" level=info msg="runSandbox: deleting pod ID b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a from idIndex" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.077331604Z" level=info msg="runSandbox: removing pod sandbox b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.077346839Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.077361369Z" level=info msg="runSandbox: unmounting shmPath for sandbox b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a-userdata-shm.mount: Succeeded.
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.097434113Z" level=info msg="runSandbox: removing pod sandbox from storage: b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.101008944Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:38.101028331Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=f2e96a27-eb4e-426d-9a4e-5f7a818b8df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:38.101252 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:49:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:38.101296 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:49:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:38.101319 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:49:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:38.101366 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b220ca6e073c7178a513375cc41cec7a12a636a1db5d71a6b61e0c2a2122e96a): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.036290057Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.036327655Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.038107990Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.038150779Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2226d58a\x2d275d\x2d4525\x2da1b9\x2dcb092ec2eb98.mount: Succeeded.
Jan 23 16:49:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1bf44177\x2d7e4f\x2d40d8\x2db649\x2d8f0faef79344.mount: Succeeded.
Jan 23 16:49:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2226d58a\x2d275d\x2d4525\x2da1b9\x2dcb092ec2eb98.mount: Succeeded.
Jan 23 16:49:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1bf44177\x2d7e4f\x2d40d8\x2db649\x2d8f0faef79344.mount: Succeeded.
Jan 23 16:49:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2226d58a\x2d275d\x2d4525\x2da1b9\x2dcb092ec2eb98.mount: Succeeded.
Jan 23 16:49:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1bf44177\x2d7e4f\x2d40d8\x2db649\x2d8f0faef79344.mount: Succeeded.
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.087310726Z" level=info msg="runSandbox: deleting pod ID 1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089 from idIndex" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.087337130Z" level=info msg="runSandbox: removing pod sandbox 1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.087349742Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.087361062Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.088383476Z" level=info msg="runSandbox: deleting pod ID 0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77 from idIndex" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.088411787Z" level=info msg="runSandbox: removing pod sandbox 0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.088430590Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.088452001Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:49:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089-userdata-shm.mount: Succeeded.
Jan 23 16:49:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77-userdata-shm.mount: Succeeded.
Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.099410501Z" level=info msg="runSandbox: removing pod sandbox from storage: 0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.102812786Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.102832703Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=9ef6f2cd-50a6-44ac-a46f-ed4a121aa27c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:39.102959 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:39.103001 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:39.103024 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:39.103071 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(0c32043f918294b61a42eceab2d08112a80f4670175c92c52e2875d63f5f6b77): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.107424504Z" level=info msg="runSandbox: removing pod sandbox from storage: 1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.110923222Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.110943881Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=a9ccae6e-400b-4ac6-9938-927fb8886f0f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:39.111148 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:39.111182 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:39.111203 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:39.111247 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(1522ad103144a3b9d7152804196ed632e0b6c3e024ac9ce615fdcd0ab8d79089): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:49:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:39.995903 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.996319181Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.996362129Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.999644724Z" level=info msg="NetworkStart: stopping network for sandbox d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.999772872Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/db334a8a-a61c-4629-a460-ff32288d59c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.999794914Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.999801533Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:49:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:39.999807334Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.001957517Z" level=info msg="NetworkStart: stopping network for sandbox bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.002056340Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3bbc2260-f80e-4597-a707-07e8746602fe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.002075546Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.002082496Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.002087685Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:49:40.003576655Z" level=info msg="NetworkStart: stopping network for sandbox aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003692067Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/2e95d957-6889-4699-8259-1a00c0542003 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003714628Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003721324Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003728269Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003786500Z" level=info msg="NetworkStart: stopping network for sandbox 8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003904659Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/59c050d3-8769-422f-9c42-96a2d5e8769e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003929404Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003936428Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003942959Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.003960287Z" level=info msg="NetworkStart: stopping network for sandbox 90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.004083640Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/daae6c89-051d-4fff-91d1-4b9ed6e55c16 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 
16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.004108544Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.004116430Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.004123349Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.008842144Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a41f4a2d-8e2d-420d-94b7-be01542ed301 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.008866868Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.033492194Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.033522769Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.034579457Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.034609350Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bb93cda0\x2d3605\x2d4535\x2db9be\x2d2f96abd51e86.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-bb93cda0\x2d3605\x2d4535\x2db9be\x2d2f96abd51e86.mount has successfully entered the 'dead' state. Jan 23 16:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bb93cda0\x2d3605\x2d4535\x2db9be\x2d2f96abd51e86.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bb93cda0\x2d3605\x2d4535\x2db9be\x2d2f96abd51e86.mount has successfully entered the 'dead' state. Jan 23 16:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b6e739a9\x2d9e0d\x2d499e\x2db855\x2dd8ec83f8fe60.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b6e739a9\x2d9e0d\x2d499e\x2db855\x2dd8ec83f8fe60.mount has successfully entered the 'dead' state. Jan 23 16:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b6e739a9\x2d9e0d\x2d499e\x2db855\x2dd8ec83f8fe60.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b6e739a9\x2d9e0d\x2d499e\x2db855\x2dd8ec83f8fe60.mount has successfully entered the 'dead' state. Jan 23 16:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b6e739a9\x2d9e0d\x2d499e\x2db855\x2dd8ec83f8fe60.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b6e739a9\x2d9e0d\x2d499e\x2db855\x2dd8ec83f8fe60.mount has successfully entered the 'dead' state. Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.072285126Z" level=info msg="runSandbox: deleting pod ID e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de from idIndex" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.072310293Z" level=info msg="runSandbox: removing pod sandbox e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.072324187Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.072338445Z" level=info msg="runSandbox: unmounting shmPath for sandbox e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bb93cda0\x2d3605\x2d4535\x2db9be\x2d2f96abd51e86.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-bb93cda0\x2d3605\x2d4535\x2db9be\x2d2f96abd51e86.mount has successfully entered the 'dead' state. Jan 23 16:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.080372747Z" level=info msg="runSandbox: deleting pod ID a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8 from idIndex" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.080396959Z" level=info msg="runSandbox: removing pod sandbox a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.080409841Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.080421797Z" level=info msg="runSandbox: unmounting shmPath for sandbox a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.096441524Z" level=info msg="runSandbox: removing pod sandbox from storage: a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.096451985Z" level=info msg="runSandbox: removing pod sandbox from storage: e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.099651631Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.099671235Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=622fbeab-cbc7-4647-9f9b-6cedda8d0d43 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:40.099879 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:40.100042 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:40.100066 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:40.100114 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e8039bc35ba70e7a4f284631bcc6bba031aed1ca02135ef3e2df46ae1a9a33de): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.102794914Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:40.102813542Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=6f94dc5d-c1f3-4b4e-8ee4-ffedaefc0e58 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:40.102974 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:40.103008 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:40.103036 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:40.103072 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a7da1c1d10846ebb42614d61b1eaa3cac9f73e6cf26fb6cf2c4b7b1500912ed8): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.030855946Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.030889905Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cebff773\x2dfd95\x2d4770\x2d825c\x2d7053650adb15.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cebff773\x2dfd95\x2d4770\x2d825c\x2d7053650adb15.mount has successfully entered the 'dead' state. Jan 23 16:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cebff773\x2dfd95\x2d4770\x2d825c\x2d7053650adb15.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cebff773\x2dfd95\x2d4770\x2d825c\x2d7053650adb15.mount has successfully entered the 'dead' state. Jan 23 16:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cebff773\x2dfd95\x2d4770\x2d825c\x2d7053650adb15.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cebff773\x2dfd95\x2d4770\x2d825c\x2d7053650adb15.mount has successfully entered the 'dead' state. Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.065285492Z" level=info msg="runSandbox: deleting pod ID 1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd from idIndex" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.065311467Z" level=info msg="runSandbox: removing pod sandbox 1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.065324659Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.065336278Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.081483004Z" level=info msg="runSandbox: removing pod sandbox from storage: 1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.084847331Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:41.084866299Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=86e309f8-0178-4197-bad9-c6368bd71653 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:41.084968 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:41.085014 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:41.085035 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:41.085077 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1235aff69c761735d5b0a6cbd84f361e8d6ef241a652fe77af3d6d31e8d221bd): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:41.996958 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:41.997451 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.032196059Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.032240187Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ac49afe0\x2df35e\x2d43cf\x2dbf08\x2d8431591ed91c.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ac49afe0\x2df35e\x2d43cf\x2dbf08\x2d8431591ed91c.mount has successfully entered the 'dead' state. Jan 23 16:49:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ac49afe0\x2df35e\x2d43cf\x2dbf08\x2d8431591ed91c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ac49afe0\x2df35e\x2d43cf\x2dbf08\x2d8431591ed91c.mount has successfully entered the 'dead' state. Jan 23 16:49:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ac49afe0\x2df35e\x2d43cf\x2dbf08\x2d8431591ed91c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ac49afe0\x2df35e\x2d43cf\x2dbf08\x2d8431591ed91c.mount has successfully entered the 'dead' state. Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.091310818Z" level=info msg="runSandbox: deleting pod ID 2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9 from idIndex" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.091334292Z" level=info msg="runSandbox: removing pod sandbox 2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.091348394Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.091361314Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.107440144Z" level=info msg="runSandbox: removing pod sandbox from storage: 2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.110893581Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.110911953Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=893b20e5-79e1-49aa-a5a0-ed882de5342e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:44.111096 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:49:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:44.111141 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:49:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:44.111164 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:49:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:44.111215 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2bc67707a683fee45692b0f06d77be93845819f81218820bf31ababe61fdd0b9): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:49:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:44.995697 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.996089758Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:44.996328018Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:45.012788315Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/cc3f0707-71bd-4fdb-8cd9-2291efa66ac7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:45.012829443Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.031911262Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.031958745Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-798410dd\x2de837\x2d4d3e\x2dbdf4\x2d1f8add272cbc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-798410dd\x2de837\x2d4d3e\x2dbdf4\x2d1f8add272cbc.mount has successfully entered the 'dead' state. Jan 23 16:49:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-798410dd\x2de837\x2d4d3e\x2dbdf4\x2d1f8add272cbc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-798410dd\x2de837\x2d4d3e\x2dbdf4\x2d1f8add272cbc.mount has successfully entered the 'dead' state. Jan 23 16:49:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-798410dd\x2de837\x2d4d3e\x2dbdf4\x2d1f8add272cbc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-798410dd\x2de837\x2d4d3e\x2dbdf4\x2d1f8add272cbc.mount has successfully entered the 'dead' state. 
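The failure pattern above is mechanical: every CNI ADD (and DEL) on this node blocks until the readiness indicator file /var/run/multus/cni/net.d/10-ovn-kubernetes.conf exists, and because the ovnkube-node container is stuck in CrashLoopBackOff that file is never written, so each RunPodSandbox attempt ends in "pollimmediate error: timed out waiting for the condition" and kubelet tears the sandbox down and retries. A minimal Go sketch of that wait, assuming the k8s.io/apimachinery wait package (Multus's real implementation differs in details such as intervals, timeouts, and configuration plumbing):

```go
// Hedged sketch only: illustrates the readiness-indicator wait that
// produces the "timed out waiting for the condition" errors logged above.
// Not Multus's actual code; paths and durations are assumptions.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func waitForReadinessIndicator(path string, timeout time.Duration) error {
	// Check immediately, then poll every second until the file exists or
	// the timeout expires. A missing file returns (false, nil) so polling
	// continues; any other stat error aborts the wait early.
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			if os.IsNotExist(err) {
				return false, nil
			}
			return false, err
		}
		return true, nil
	})
}

func main() {
	err := waitForReadinessIndicator(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 30*time.Second)
	if err != nil {
		// On timeout, wait.PollImmediate returns the error string
		// "timed out waiting for the condition", matching the log.
		fmt.Println("CNI ADD would fail here:", err)
	}
}
```

Run against a directory where the indicator file never appears, this reproduces the exact timeout error string that kubenswrapper wraps into its CreatePodSandboxError events above; once ovnkube-node writes the file, the same wait returns nil and sandbox creation proceeds.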
Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.082312476Z" level=info msg="runSandbox: deleting pod ID a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed from idIndex" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.082344021Z" level=info msg="runSandbox: removing pod sandbox a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.082360497Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.082374930Z" level=info msg="runSandbox: unmounting shmPath for sandbox a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.102431197Z" level=info msg="runSandbox: removing pod sandbox from storage: a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.105472345Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:46.105490439Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=ac5d2a78-99e9-4855-bd9b-7a8c5efc7300 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:46.105738 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:49:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:46.105778 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:49:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:46.105798 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:49:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:46.105848 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a5e47f246a73dfb26336d393bda84e69349dfeb98f95cefbacd3f3f9c9417aed): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:49:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:47.996714 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:49:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:47.997145053Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:47.997200427Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:48.010167303Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/cb0fcb58-46bf-475b-92a4-bab0de6eb128 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:48.010196436Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:49.996590 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:49:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:49.996980571Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:49.997038980Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:50.007708326Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/95602a98-2d24-496c-b379-9176a85c4bda Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:50.007728951Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:50.996085 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:49:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:50.996484545Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:50.996528007Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:51.007018926Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/10a045c0-71fe-455a-881a-4cacb3ae10c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:51.007041534Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:51.996311 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:51.996663557Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:51.996708481Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:52.008173021Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/97e06d19-5346-4310-bdde-59351f47cd69 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:52.008196460Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:52.996040 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:49:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:52.996592891Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:52.996638761Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:53.008158610Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/46f883d8-6232-42b3-929f-87d524e290c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:53.008180742Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:53.996305 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:53.996647335Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:53.996700068Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:54.007420070Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/b98efc71-895a-4782-9212-964c1b7aa06c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:54.007446675Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:54.995576 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:49:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:54.995723 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:49:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:54.995883475Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:54.995929082Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:54.996012433Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:54.996056502Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:55.011313547Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/784c001f-f3ed-49a0-ab1d-46cf0c9ee5f0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:55.011340164Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:55.013524898Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/367ff175-5a76-495b-9a8a-455817507920 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:55.013549916Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:56.997170 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:49:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:49:56.997685 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:49:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:57.996437 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:49:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:49:57.996588 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:49:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:57.996781758Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:57.996826846Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:57.996963798Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:49:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:57.997013362Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:49:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:58.012188103Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/77538b8b-1825-4f99-a4d1-4476e073073b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:58.012217950Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:58.012599377Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/3f8fe429-b39b-4eb2-b2d5-961ae0ecb9b3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:49:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:58.012620743Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:49:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:49:58.143921524Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:50:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:08.001173 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:50:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:08.001688 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:50:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:19.996597 8631 scope.go:115] "RemoveContainer" 
containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:50:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:19.997238 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.010682340Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.010941685Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.013012211Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.013044224Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.014513617Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.014539112Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.014572605Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.014545922Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.015204035Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.015245224Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-db334a8a\x2da61c\x2d4629\x2da460\x2dff32288d59c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-db334a8a\x2da61c\x2d4629\x2da460\x2dff32288d59c8.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-daae6c89\x2d051d\x2d4fff\x2d91d1\x2d4b9ed6e55c16.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-daae6c89\x2d051d\x2d4fff\x2d91d1\x2d4b9ed6e55c16.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-59c050d3\x2d8769\x2d422f\x2d9c42\x2d96a2d5e8769e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-59c050d3\x2d8769\x2d422f\x2d9c42\x2d96a2d5e8769e.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2e95d957\x2d6889\x2d4699\x2d8259\x2d1a00c0542003.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2e95d957\x2d6889\x2d4699\x2d8259\x2d1a00c0542003.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3bbc2260\x2df80e\x2d4597\x2da707\x2d07e8746602fe.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3bbc2260\x2df80e\x2d4597\x2da707\x2d07e8746602fe.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.022508036Z" level=info msg="NetworkStart: stopping network for sandbox 56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.022656354Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a41f4a2d-8e2d-420d-94b7-be01542ed301 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.022684358Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.022691372Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.022699397Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3bbc2260\x2df80e\x2d4597\x2da707\x2d07e8746602fe.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3bbc2260\x2df80e\x2d4597\x2da707\x2d07e8746602fe.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-daae6c89\x2d051d\x2d4fff\x2d91d1\x2d4b9ed6e55c16.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-daae6c89\x2d051d\x2d4fff\x2d91d1\x2d4b9ed6e55c16.mount has successfully entered the 'dead' state. 
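From 16:50:25 onward the same timeout cascades repeat for route-controller-manager, the oauth-apiserver and openshift-apiserver API servers, controller-manager, and oauth-openshift, so the journal is dominated by near-identical entries. When triaging an exported excerpt like this offline, a per-pod tally makes the blast radius obvious at a glance; a hypothetical helper in Go (node-journal.log is an assumed filename for the saved excerpt, not something any tool here produces):

```go
// Hypothetical triage helper, not part of any OpenShift tooling: scans a
// saved journal excerpt and counts, per namespace/pod, how many sandbox
// creations failed. Relies on the CRI sandbox naming convention
// k8s_<pod>_<namespace>_<uid>_<attempt> seen throughout the log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(
		`failed to create pod network sandbox k8s_([^_]+)_([^_]+)_`)
	counts := map[string]int{}

	f, err := os.Open("node-journal.log") // assumed path to the excerpt
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Journal lines in this excerpt are very long; raise the scanner limit.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]+"/"+m[1]]++ // key as namespace/pod
		}
	}
	for pod, n := range counts {
		fmt.Printf("%4d sandbox failures  %s\n", n, pod)
	}
}
```

On an excerpt like this one the output would show the openshift-kube-apiserver revision-pruner and installer pods alongside the apiserver and controller-manager deployments, all failing for the same single root cause: the crash-looping ovnkube-node container logged at 16:49:56, 16:50:08, and 16:50:19.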
Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-db334a8a\x2da61c\x2d4629\x2da460\x2dff32288d59c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-db334a8a\x2da61c\x2d4629\x2da460\x2dff32288d59c8.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-59c050d3\x2d8769\x2d422f\x2d9c42\x2d96a2d5e8769e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-59c050d3\x2d8769\x2d422f\x2d9c42\x2d96a2d5e8769e.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2e95d957\x2d6889\x2d4699\x2d8259\x2d1a00c0542003.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2e95d957\x2d6889\x2d4699\x2d8259\x2d1a00c0542003.mount has successfully entered the 'dead' state. Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.065344576Z" level=info msg="runSandbox: deleting pod ID bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde from idIndex" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.065375254Z" level=info msg="runSandbox: removing pod sandbox bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.065389570Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.065402552Z" level=info msg="runSandbox: unmounting shmPath for sandbox bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.065345613Z" level=info msg="runSandbox: deleting pod ID d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97 from idIndex" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.065462402Z" level=info msg="runSandbox: removing pod sandbox d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.065476009Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.065487746Z" level=info msg="runSandbox: unmounting shmPath for sandbox d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069315297Z" level=info msg="runSandbox: deleting pod ID aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61 from idIndex" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069334506Z" level=info msg="runSandbox: deleting pod ID 90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4 from idIndex" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069361538Z" level=info msg="runSandbox: removing pod sandbox 90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069374370Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069387931Z" level=info msg="runSandbox: unmounting shmPath for sandbox 90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069338325Z" level=info msg="runSandbox: deleting pod ID 8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7 from idIndex" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069437309Z" level=info msg="runSandbox: removing pod sandbox 8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069452541Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069466325Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069347780Z" level=info msg="runSandbox: removing pod sandbox aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069634381Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.069650000Z" level=info msg="runSandbox: unmounting shmPath for sandbox 
aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.077447226Z" level=info msg="runSandbox: removing pod sandbox from storage: d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.078426121Z" level=info msg="runSandbox: removing pod sandbox from storage: bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.080657502Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.080677010Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=2eb85c19-804b-4a85-8f6a-4968b5f34ed8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.080918 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.080971 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.080995 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.081049 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.083925562Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.083944466Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=76c2983f-0db7-4994-9918-7a2cea25ef9a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.084138 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.084172 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.084193 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.084236 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.085455976Z" level=info msg="runSandbox: removing pod sandbox from storage: aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.085481867Z" level=info msg="runSandbox: removing pod sandbox from storage: 90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.085601637Z" level=info msg="runSandbox: removing pod sandbox from storage: 8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.088741849Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.088760634Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=5a8853f3-4dda-4676-bea9-d6cd9b0a4e92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.088936 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": 
plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.088997 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.089038 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.089103 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.091726720Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.091743560Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f935944a-fd35-46c8-b340-a68990a322ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.091947 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.091980 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.092000 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.092037 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.094760117Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.094776324Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=e00c7aef-7d6b-45fe-b098-dd6b9bc1947b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.094890 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.094940 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.094977 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:25.095034 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:25.128531 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:25.128552 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:25.128763 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:25.128881 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.128894105Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.128923699Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.128987347Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.129017331Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:50:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:25.129021 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.129073052Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.129100037Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.129229017Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.129244449Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.129245053Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.129294364Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.156367376Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/373bedf6-ec97-4a03-ac98-92bd5054cba1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.156392626Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.156518007Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/df60da2a-5f41-47e2-b93f-79c811b60a68 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.156538479Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.161326188Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/32f046f2-adb0-4ae5-8f9a-26cd0a5d7aef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.161348862Z" level=info msg="Adding pod 
openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.162305948Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/08f14945-4d22-4cd3-9f3a-9d46d77dcdfe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.162329235Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.163999598Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/35bda630-a4b7-40c5-a941-6692db334c05 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:25.164023588Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-daae6c89\x2d051d\x2d4fff\x2d91d1\x2d4b9ed6e55c16.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-daae6c89\x2d051d\x2d4fff\x2d91d1\x2d4b9ed6e55c16.mount has successfully entered the 'dead' state. Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-59c050d3\x2d8769\x2d422f\x2d9c42\x2d96a2d5e8769e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-59c050d3\x2d8769\x2d422f\x2d9c42\x2d96a2d5e8769e.mount has successfully entered the 'dead' state. Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2e95d957\x2d6889\x2d4699\x2d8259\x2d1a00c0542003.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2e95d957\x2d6889\x2d4699\x2d8259\x2d1a00c0542003.mount has successfully entered the 'dead' state. Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3bbc2260\x2df80e\x2d4597\x2da707\x2d07e8746602fe.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3bbc2260\x2df80e\x2d4597\x2da707\x2d07e8746602fe.mount has successfully entered the 'dead' state. Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-db334a8a\x2da61c\x2d4629\x2da460\x2dff32288d59c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-db334a8a\x2da61c\x2d4629\x2da460\x2dff32288d59c8.mount has successfully entered the 'dead' state. 
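
[Analysis note] The CreatePodSandbox failures above (route-controller-manager, oauth-apiserver, controller-manager, apiserver, oauth-openshift) all fail the same way: Multus polls for the readiness indicator file of the default network, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes writes only once it is up, and the poll times out. A minimal sketch for checking that file on the node — the path is copied verbatim from the error text; actually running this on the host is an assumption, not something the log shows:

    import os, time

    # Path copied verbatim from the Multus error text above.
    INDICATOR = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

    if not os.path.exists(INDICATOR):
        print(INDICATOR, "is missing: ovn-kubernetes has not signalled readiness,")
        print("so every Multus CNI ADD/DEL will keep polling until it times out.")
    else:
        age = time.time() - os.path.getmtime(INDICATOR)
        print("%s present, last written %.0fs ago" % (INDICATOR, age))

Given the ovnkube-node CrashLoopBackOff entries later in this log, the file is presumably absent, which would explain why every ADD through Multus times out.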
Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-90eeb9e2d58ef7cbdf5829117acf42b37aedba3c645fee2ca32c4883f3d7a4e4-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bb0da9e55d1287d91bebd880e9b712695cbf53871e1b2037e44fca9ec68cfcde-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8ab4085846ed1d19016b55fc9791f80c906d00f75455e7a3397ce2e5bee9a0d7-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-aef53f222f6456f6367c179c2aa03b7bd82d25a067d63bdec52342be3e348b61-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:50:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d1a04c0e49fefe4b2a8d1dbd57c318c2db691f40474023270038a0e80539eb97-userdata-shm.mount has successfully entered the 'dead' state. 
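
[Analysis note] The shm mounts for the failed sandboxes are cleaned up here while replacement sandboxes were already being created above: the sandbox IDs change on every attempt, but the pod UIDs stay fixed, so these are the same pods cycling. To quantify how often each pod is cycling, a small parsing sketch over a saved copy of this journal (the filename journal.txt is hypothetical, e.g. from `journalctl > journal.txt`):

    import re
    from collections import Counter

    # journal.txt is a hypothetical dump of this log.
    pod_re = re.compile(r'pod="([^"]+)"')
    cycles = Counter()
    with open("journal.txt") as fh:
        for line in fh:
            if "Error syncing pod" in line and "CreatePodSandboxError" in line:
                m = pod_re.search(line)
                if m:
                    cycles[m.group(1)] += 1
    for pod, n in cycles.most_common():
        print("%4d  %s" % (n, pod))

Each hit corresponds to one full runSandbox/teardown cycle like the ones logged above.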
Jan 23 16:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:27.873349 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:27.873368 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:27.873375 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:27.873381 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:27.873387 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:27.873394 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:27.873401 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:50:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:27.878775435Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=b7ad3587-499b-4a7c-a9ac-ab11ab19db08 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:50:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:27.878890628Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b7ad3587-499b-4a7c-a9ac-ab11ab19db08 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:50:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:28.142649851Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:50:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:30.025095745Z" level=info msg="NetworkStart: stopping network for sandbox 9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:30.025241562Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/cc3f0707-71bd-4fdb-8cd9-2291efa66ac7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:30.025266158Z" level=error msg="error loading cached network 
config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:30.025274314Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:30.025281287Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:32.996841 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:32.997522 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:50:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:33.022136500Z" level=info msg="NetworkStart: stopping network for sandbox 90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:33.022322686Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/cb0fcb58-46bf-475b-92a4-bab0de6eb128 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:33.022344974Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:33.022351623Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:33.022359954Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:35.020313370Z" level=info msg="NetworkStart: stopping network for sandbox 56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:35.020453796Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/95602a98-2d24-496c-b379-9176a85c4bda Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:35.020476751Z" level=error msg="error loading cached network config: network 
\"multus-cni-network\" not found in CNI cache" Jan 23 16:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:35.020483289Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:35.020489401Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:36.018953208Z" level=info msg="NetworkStart: stopping network for sandbox 1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:36.019098490Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/10a045c0-71fe-455a-881a-4cacb3ae10c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:36.019125392Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:36.019132246Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:36.019140038Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:37.021361927Z" level=info msg="NetworkStart: stopping network for sandbox 688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:37.021513094Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/97e06d19-5346-4310-bdde-59351f47cd69 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:37.021538936Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:37.021546257Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:37.021552869Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:38.020622265Z" level=info msg="NetworkStart: stopping network for sandbox 47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:38.020774549Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/46f883d8-6232-42b3-929f-87d524e290c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:38.020799682Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:38.020806757Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:38.020813018Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:39.020467641Z" level=info msg="NetworkStart: stopping network for sandbox 0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:39.020634473Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/b98efc71-895a-4782-9212-964c1b7aa06c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:39.020660583Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:39.020668511Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:39.020676107Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.024138071Z" level=info msg="NetworkStart: stopping network for sandbox 46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.024281779Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/784c001f-f3ed-49a0-ab1d-46cf0c9ee5f0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.024303365Z" level=error msg="error loading cached network 
config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.024310246Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.024316725Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.026748970Z" level=info msg="NetworkStart: stopping network for sandbox 0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.026866603Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/367ff175-5a76-495b-9a8a-455817507920 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.026887250Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.026894199Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:40.026899751Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026609963Z" level=info msg="NetworkStart: stopping network for sandbox a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026627200Z" level=info msg="NetworkStart: stopping network for sandbox 776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026756342Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/3f8fe429-b39b-4eb2-b2d5-961ae0ecb9b3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026778354Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026784986Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026791255Z" level=info msg="Deleting pod 
openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026804902Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/77538b8b-1825-4f99-a4d1-4476e073073b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026827683Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026834796Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:50:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:43.026842169Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:50:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:44.996837 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:50:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:44.997353 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:50:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:50:58.143735287Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:50:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:50:59.997038 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:50:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:50:59.997595 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492668.1190] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492668.1196] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 16:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492668.1197] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492668.1199] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:51:08 hub-master-0.workload.bos2.lab 
NetworkManager[3328]: [1674492668.1204] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492668.1209] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:51:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492669.5874] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.033434130Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.033625847Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a41f4a2d\x2d8e2d\x2d420d\x2d94b7\x2dbe01542ed301.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a41f4a2d\x2d8e2d\x2d420d\x2d94b7\x2dbe01542ed301.mount has successfully entered the 'dead' state. Jan 23 16:51:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a41f4a2d\x2d8e2d\x2d420d\x2d94b7\x2dbe01542ed301.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a41f4a2d\x2d8e2d\x2d420d\x2d94b7\x2dbe01542ed301.mount has successfully entered the 'dead' state. Jan 23 16:51:10 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a41f4a2d\x2d8e2d\x2d420d\x2d94b7\x2dbe01542ed301.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a41f4a2d\x2d8e2d\x2d420d\x2d94b7\x2dbe01542ed301.mount has successfully entered the 'dead' state. 
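
[Analysis note] ovnkube-node-897lw is at kubelet's maximum restart back-off ("back-off 5m0s"), so the failed container is retried at most once every five minutes; the roughly 12-15 second cadence of the "RemoveContainer" / "Error syncing pod, skipping" pairs is the kubelet sync loop re-checking and skipping, not actual restarts. For reference, kubelet's crash-loop back-off doubles from a 10s base to a 300s cap (standard kubelet defaults, not shown in this log):

    # Kubelet restarts a crash-looping container with exponential back-off:
    # 10s base, doubling per failure, capped at 5m (standard kubelet defaults).
    delay, schedule = 10, []
    while delay < 300:
        schedule.append(delay)
        delay *= 2
    schedule.append(300)
    print(", ".join("%ds" % d for d in schedule))  # 10s, 20s, 40s, 80s, 160s, 300s

The 300s cap matches the "back-off 5m0s" figure in the messages above.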
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.077467237Z" level=info msg="runSandbox: deleting pod ID 56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e from idIndex" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.077503911Z" level=info msg="runSandbox: removing pod sandbox 56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.077521662Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.077535080Z" level=info msg="runSandbox: unmounting shmPath for sandbox 56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.089489743Z" level=info msg="runSandbox: removing pod sandbox from storage: 56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.092622551Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.092642161Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=00f8dea6-0773-4ea4-8001-1eeabf782f60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:10.092968 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:10.093082 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:10.093107 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:10.093168 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(56af372fd091ff48270cd211a6e22777ce08ed0cf7ceb58d4e1cf56099e6ce4e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169587091Z" level=info msg="NetworkStart: stopping network for sandbox 97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169642430Z" level=info msg="NetworkStart: stopping network for sandbox d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169733165Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/df60da2a-5f41-47e2-b93f-79c811b60a68 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169756255Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169763379Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169766562Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/373bedf6-ec97-4a03-ac98-92bd5054cba1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169792633Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169800743Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169807545Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.169769812Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.174349834Z" level=info msg="NetworkStart: stopping network for sandbox 742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.174471386Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/32f046f2-adb0-4ae5-8f9a-26cd0a5d7aef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.174495400Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.174504107Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.174510231Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175008917Z" level=info msg="NetworkStart: stopping network for sandbox 6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175178036Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/08f14945-4d22-4cd3-9f3a-9d46d77dcdfe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175230532Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175245433Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175258013Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175577314Z" level=info msg="NetworkStart: stopping network for sandbox 54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175679331Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/35bda630-a4b7-40c5-a941-6692db334c05 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175700844Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175707170Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:51:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:10.175712852Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:14.996962 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:51:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:14.997518 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.036057017Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.036096820Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cc3f0707\x2d71bd\x2d4fdb\x2d8cd9\x2d2291efa66ac7.mount: Succeeded.
Jan 23 16:51:15 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cc3f0707\x2d71bd\x2d4fdb\x2d8cd9\x2d2291efa66ac7.mount: Succeeded.
Jan 23 16:51:15 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cc3f0707\x2d71bd\x2d4fdb\x2d8cd9\x2d2291efa66ac7.mount: Succeeded.
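The recurring "still waiting for readinessindicatorfile ... pollimmediate error" text above is Multus refusing to wire up any pod until the default network plugin (ovn-kubernetes, whose ovnkube-node container is in CrashLoopBackOff) writes its CNI config. A minimal Go sketch of that kind of wait, assuming a simple os.Stat poll via k8s.io/apimachinery's wait.PollImmediate; names here are illustrative, not Multus's actual code:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// Path named in the log entries; written by the default network plugin
// once it is up, and polled for by Multus before any CNI ADD/DEL proceeds.
const readinessIndicatorFile = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

// waitForReadinessIndicator polls until the indicator file exists or the
// timeout elapses, in which case wait.PollImmediate returns its timeout error.
func waitForReadinessIndicator(timeout time.Duration) error {
	return wait.PollImmediate(1*time.Second, timeout, func() (bool, error) {
		_, err := os.Stat(readinessIndicatorFile)
		if os.IsNotExist(err) {
			return false, nil // not written yet; keep polling
		}
		return err == nil, err
	})
}

func main() {
	if err := waitForReadinessIndicator(10 * time.Second); err != nil {
		fmt.Printf("pollimmediate error: %v\n", err)
	}
}
```

The timeout error from wait is literally "timed out waiting for the condition", which is why that exact phrase is embedded in every failed add and delete above.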
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.084312623Z" level=info msg="runSandbox: deleting pod ID 9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1 from idIndex" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.084337987Z" level=info msg="runSandbox: removing pod sandbox 9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.084352384Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.084370505Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1-userdata-shm.mount: Succeeded.
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.100421748Z" level=info msg="runSandbox: removing pod sandbox from storage: 9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.106752565Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:15.106783234Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=a02fa4aa-45d4-46ca-a1de-d6e002fb70f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:15.107026 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:15.107065 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:15.107086 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:15.107125 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9a70d925d8a2b12a2bd1a0b6e4e7c391d63fa363bb16d9d8a237632a034d50b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.033279837Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.033320365Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cb0fcb58\x2d46bf\x2d475b\x2d92a4\x2dbab0de6eb128.mount: Succeeded.
Jan 23 16:51:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cb0fcb58\x2d46bf\x2d475b\x2d92a4\x2dbab0de6eb128.mount: Succeeded.
Jan 23 16:51:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cb0fcb58\x2d46bf\x2d475b\x2d92a4\x2dbab0de6eb128.mount: Succeeded.
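Each failed sandbox surfaces four times in the kubelet entries (remote_runtime.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go) because the same gRPC error is re-logged at each layer, with one more level of quoting each time. The "rpc error: code = Unknown desc = ..." prefix is gRPC's rendering of a plain error returned by the runtime over CRI; a minimal sketch, assuming google.golang.org/grpc's status package (the error text is shortened for illustration):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/status"
)

func main() {
	// A non-status error, as CRI-O's RunPodSandbox handler would return it.
	crioErr := errors.New(`failed to create pod network sandbox: error adding pod to CNI network "multus-cni-network"`)

	// gRPC maps any error that carries no explicit status to codes.Unknown;
	// status.Convert performs that wrapping on the server/client boundary.
	grpcErr := status.Convert(crioErr).Err()

	// Prints: rpc error: code = Unknown desc = failed to create pod network sandbox: ...
	fmt.Println(grpcErr)
}
```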
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.082309578Z" level=info msg="runSandbox: deleting pod ID 90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7 from idIndex" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.082339191Z" level=info msg="runSandbox: removing pod sandbox 90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.082361111Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.082385753Z" level=info msg="runSandbox: unmounting shmPath for sandbox 90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7-userdata-shm.mount: Succeeded.
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.094468180Z" level=info msg="runSandbox: removing pod sandbox from storage: 90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.098043844Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:18.098063781Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=aaf462e2-02de-4ff0-a9fc-cb502cfcdf38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:18.098307 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:18.098350 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:18.098375 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:18.098424 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(90d49daca84c017f08d33926a2debb27bb84e3778d7af8f5f3cf4634a5bf1ea7): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.031899044Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.031935372Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-95602a98\x2d2d24\x2d496c\x2db379\x2d9176a85c4bda.mount: Succeeded.
Jan 23 16:51:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-95602a98\x2d2d24\x2d496c\x2db379\x2d9176a85c4bda.mount: Succeeded.
Jan 23 16:51:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-95602a98\x2d2d24\x2d496c\x2db379\x2d9176a85c4bda.mount: Succeeded.
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.090313275Z" level=info msg="runSandbox: deleting pod ID 56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf from idIndex" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.090340089Z" level=info msg="runSandbox: removing pod sandbox 56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.090355515Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.090367897Z" level=info msg="runSandbox: unmounting shmPath for sandbox 56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf-userdata-shm.mount: Succeeded.
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.111476838Z" level=info msg="runSandbox: removing pod sandbox from storage: 56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.115011687Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.115029694Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=0a979a49-db78-4627-b200-9c7cc2d12a36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:20.115254 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:20.115300 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:20.115323 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:20.115373 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(56ce4810682a23f9cfa4ffd6fd733855122bd2187bc282426bcfed6c68951dcf): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 16:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:20.996056 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.996378940Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:20.996421156Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.008374313Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/79716a93-7598-4337-a772-1cb5fc1a0638 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.008560283Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.029578793Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.029622299Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-10a045c0\x2d71fe\x2d455a\x2d881a\x2d4cacb3ae10c8.mount: Succeeded.
Jan 23 16:51:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-10a045c0\x2d71fe\x2d455a\x2d881a\x2d4cacb3ae10c8.mount: Succeeded.
Jan 23 16:51:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-10a045c0\x2d71fe\x2d455a\x2d881a\x2d4cacb3ae10c8.mount: Succeeded.
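The cycle visible here — sandbox teardown, "No sandbox for pod can be found. Need to start a new one", then a fresh RunPodSandbox with a new sandbox ID — repeats under the kubelet's per-container restart backoff. The "back-off 5m0s restarting failed container" entry for ovnkube-node is that backoff at its cap; a sketch of the doubling-with-cap behavior (the 10s initial delay and 5m cap match kubelet defaults, the loop itself is purely illustrative):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const max = 5 * time.Minute // kubelet's MaxContainerBackOff: delay stays at 5m0s once reached
	delay := 10 * time.Second   // initial delay after the first crash

	// Each successive crash doubles the delay until it hits the cap,
	// producing the "back-off 5m0s" seen for a persistently crashing container.
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
}
```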
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.076277116Z" level=info msg="runSandbox: deleting pod ID 1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee from idIndex" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.076307170Z" level=info msg="runSandbox: removing pod sandbox 1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.076323148Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.076338631Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee-userdata-shm.mount: Succeeded.
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.092420400Z" level=info msg="runSandbox: removing pod sandbox from storage: 1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.095183952Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:21.095202689Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=dff817d9-5016-4075-9040-4760f351b465 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:21.095444 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:21.095487 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:51:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:21.095509 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:51:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:21.095552 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(1915bf35ad8a673a2cacf923512341d186708f80d1af03127f8c51ead2bf6aee): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.033263285Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.033305483Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-97e06d19\x2d5346\x2d4310\x2dbdde\x2d59351f47cd69.mount: Succeeded.
Jan 23 16:51:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-97e06d19\x2d5346\x2d4310\x2dbdde\x2d59351f47cd69.mount: Succeeded.
Jan 23 16:51:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-97e06d19\x2d5346\x2d4310\x2dbdde\x2d59351f47cd69.mount: Succeeded.
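The run-utsns-*/run-ipcns-*/run-netns-*.mount units going "dead" after each failure are the per-sandbox namespace bind mounts being torn down. The \x2d runs in their names are systemd's escaping of literal '-' characters when a path component (here the namespace UUID) becomes part of a unit name; a small illustrative re-implementation of just that rule (simplified — the real systemd-escape logic covers more characters):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeSystemdNamePart mimics, in simplified form, how systemd escapes
// a '-' inside a path component embedded in a unit name: '-' becomes \x2d.
func escapeSystemdNamePart(s string) string {
	return strings.ReplaceAll(s, "-", `\x2d`)
}

func main() {
	uuid := "97e06d19-5346-4310-bdde-59351f47cd69"
	// Prints: run-netns-97e06d19\x2d5346\x2d4310\x2dbdde\x2d59351f47cd69.mount
	fmt.Printf("run-netns-%s.mount\n", escapeSystemdNamePart(uuid))
}
```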
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.076298618Z" level=info msg="runSandbox: deleting pod ID 688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709 from idIndex" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.076329083Z" level=info msg="runSandbox: removing pod sandbox 688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.076347114Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.076361399Z" level=info msg="runSandbox: unmounting shmPath for sandbox 688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709-userdata-shm.mount: Succeeded.
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.100441661Z" level=info msg="runSandbox: removing pod sandbox from storage: 688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.103797598Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:22.103818379Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=ca67fe9c-cd9c-49fd-9a4d-0d5fb15c7762 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:22.103944 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:22.103989 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:51:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:22.104012 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:51:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:22.104062 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(688c570f38e492a8c518c4693a8eb4066405f5d9a19e0fbea48a3dc10c404709): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.031418428Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.031459115Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-46f883d8\x2d6232\x2d42b3\x2d929f\x2d87d524e290c1.mount: Succeeded.
Jan 23 16:51:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-46f883d8\x2d6232\x2d42b3\x2d929f\x2d87d524e290c1.mount: Succeeded.
Jan 23 16:51:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-46f883d8\x2d6232\x2d42b3\x2d929f\x2d87d524e290c1.mount: Succeeded.
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.078321701Z" level=info msg="runSandbox: deleting pod ID 47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc from idIndex" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.078348925Z" level=info msg="runSandbox: removing pod sandbox 47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.078364529Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.078377039Z" level=info msg="runSandbox: unmounting shmPath for sandbox 47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc-userdata-shm.mount: Succeeded.
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.099438772Z" level=info msg="runSandbox: removing pod sandbox from storage: 47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.102919080Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:23.102938221Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=aff259b2-9c22-4fad-9a7c-8fa8a1896f9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:23.103069 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:23.103113 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:23.103138 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:23.103185 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(47125492ab540cc351ace72d424420c9b1ccf0a4ab025cc21fd886ce61b91bcc): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.031684943Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.031730805Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b98efc71\x2d895a\x2d4782\x2d9212\x2d964c1b7aa06c.mount: Succeeded.
Jan 23 16:51:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b98efc71\x2d895a\x2d4782\x2d9212\x2d964c1b7aa06c.mount: Succeeded.
Jan 23 16:51:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b98efc71\x2d895a\x2d4782\x2d9212\x2d964c1b7aa06c.mount: Succeeded.
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.071318806Z" level=info msg="runSandbox: deleting pod ID 0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb from idIndex" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.071346317Z" level=info msg="runSandbox: removing pod sandbox 0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.071362289Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.071375824Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb-userdata-shm.mount: Succeeded.
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.095443946Z" level=info msg="runSandbox: removing pod sandbox from storage: 0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.098786561Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:24.098805632Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=34e51d65-28ba-4ba9-a234-943d6b63d579 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:24.099030 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:24.099075 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:24.099100 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:24.099151 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(0b2e38f9334eb59202f7e2879cc96f0f107a14f4b59726618553209c7ab5bdcb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.034948889Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.034992172Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.037843684Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.037872801Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-784c001f\x2df3ed\x2d49a0\x2dab1d\x2d46cf0c9ee5f0.mount: Succeeded.
Jan 23 16:51:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-367ff175\x2d5a76\x2d495b\x2d9a8a\x2d455817507920.mount: Succeeded.
Jan 23 16:51:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-784c001f\x2df3ed\x2d49a0\x2dab1d\x2d46cf0c9ee5f0.mount: Succeeded.
Jan 23 16:51:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-367ff175\x2d5a76\x2d495b\x2d9a8a\x2d455817507920.mount: Succeeded.
Jan 23 16:51:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-784c001f\x2df3ed\x2d49a0\x2dab1d\x2d46cf0c9ee5f0.mount: Succeeded.
Jan 23 16:51:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-367ff175\x2d5a76\x2d495b\x2d9a8a\x2d455817507920.mount: Succeeded.
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.084303172Z" level=info msg="runSandbox: deleting pod ID 46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed from idIndex" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.084328331Z" level=info msg="runSandbox: removing pod sandbox 46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.084344405Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.084355739Z" level=info msg="runSandbox: unmounting shmPath for sandbox 46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.088301552Z" level=info msg="runSandbox: deleting pod ID 0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee from idIndex" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.088327437Z" level=info msg="runSandbox: removing pod sandbox 0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.088340396Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.088352781Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.096439133Z" level=info msg="runSandbox: removing pod sandbox from storage: 46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.099833562Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.099852665Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=66064306-7aae-48c8-922c-2d312ef297c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:25.100073 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:25.100124 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:25.100150 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:25.100215 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.104461723Z" level=info msg="runSandbox: removing pod sandbox from storage: 0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.107614697Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.107633247Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=6581389b-ff81-4ae0-9d4b-d58ab50bb781 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:25.107821 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:25.107861 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:25.107885 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:25.107932 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:25.996079 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.996486436Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:25.996524705Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:26.007617749Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/22917cae-9ac2-4a5c-b7e1-ad76884f1467 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:26.007637450Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0b605092e8dad2ecb33447314296184a25bbe8aaf762748ad7c3da7b32a804ee-userdata-shm.mount: Succeeded.
Jan 23 16:51:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-46230b79b5a2f0ce271694beac6e4ba851ed63f95c1a9172865a61dd0add49ed-userdata-shm.mount: Succeeded.
Jan 23 16:51:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:26.996518 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:51:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:26.997020 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:27.873588 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:27.873610 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:27.873617 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:27.873624 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:27.873630 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:27.873637 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:27.873645 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.038295438Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.038337864Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.038674665Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.038709952Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3f8fe429\x2db39b\x2d4eb2\x2db2d5\x2d961ae0ecb9b3.mount: Succeeded.
Jan 23 16:51:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-77538b8b\x2d1825\x2d4f99\x2da4d1\x2d4476e073073b.mount: Succeeded.
Jan 23 16:51:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-77538b8b\x2d1825\x2d4f99\x2da4d1\x2d4476e073073b.mount: Succeeded.
Jan 23 16:51:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3f8fe429\x2db39b\x2d4eb2\x2db2d5\x2d961ae0ecb9b3.mount: Succeeded.
Jan 23 16:51:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-77538b8b\x2d1825\x2d4f99\x2da4d1\x2d4476e073073b.mount: Succeeded.
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.094281605Z" level=info msg="runSandbox: deleting pod ID 776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70 from idIndex" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.094307124Z" level=info msg="runSandbox: removing pod sandbox 776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.094320151Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.094332592Z" level=info msg="runSandbox: unmounting shmPath for sandbox 776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.103303441Z" level=info msg="runSandbox: deleting pod ID a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d from idIndex" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.103326385Z" level=info msg="runSandbox: removing pod sandbox a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.103338657Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.103352840Z" level=info msg="runSandbox: unmounting shmPath for sandbox a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.110435863Z" level=info msg="runSandbox: removing pod sandbox from storage: 776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.113439121Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.113457309Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=b58757fd-54d8-45ac-ad9c-384e8a443607 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:28.113690 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:28.113878 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:51:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:28.113901 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:51:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:28.113948 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.118425192Z" level=info msg="runSandbox: removing pod sandbox from storage: a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.121665236Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.121685357Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=db22cc0f-9445-4eb5-82a3-701be37701d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:28.121856 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:51:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:28.121892 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:51:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:28.121916 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:51:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:28.121964 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:28.141931374Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:51:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3f8fe429\x2db39b\x2d4eb2\x2db2d5\x2d961ae0ecb9b3.mount: Succeeded.
Jan 23 16:51:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a9042ca3502764f95483e580bdc95b48aadc312b440cd9b3d6a550166a0e479d-userdata-shm.mount: Succeeded.
Jan 23 16:51:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-776b2ef4d642b06cfa5504247599ab5a24b0097024b2777add3ad1522b730b70-userdata-shm.mount: Succeeded.
Jan 23 16:51:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:30.995876 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:30.996317564Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:30.996361851Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:31.009407247Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/15f50647-72d4-4a15-9c7b-90e9fba93e18 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:31.009588080Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:33.996052 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:51:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:33.996347435Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:33.996388523Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:34.007533273Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b67d2957-1245-4f3f-90cb-6dc93faf2164 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:34.007556269Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:34.996118 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:34.996497945Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:34.996543055Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:35.011692395Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/98f75df3-3ac5-42d9-99cb-f2f0fdcdc28c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:35.011736558Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:36.996323 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:36.996388 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:51:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:36.996600 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:51:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:36.996738 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:36.996734814Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:36.996777980Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:36.996825126Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:36.996854520Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:36.996903118Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:36.996931888Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:36.996941640Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:36.996956032Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:37.019188531Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/55665bf4-4f2b-41c3-9d3a-d7e19760feb2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:37.019213715Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:37.020858948Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/3d8a99dc-4404-4ebe-8a3f-27164ee2b521 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:37.020884069Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:37 hub-master-0.workload.bos2.lab crio[8584]:
time="2023-01-23 16:51:37.021390443Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/9de11bf5-4e32-4bb4-baba-ff78be78849c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:37.021411439Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:37.022324746Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/2e2e2df6-e46b-4975-8fa6-f63f0f572abc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:37.022359668Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:51:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:38.996450 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:51:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:38.996976 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:51:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:40.995759 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:51:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:40.995833 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:51:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:40.996218711Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:40.996255720Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:51:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:40.996316649Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:40.996346026Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:41.010533764Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/93b5f618-81f4-423b-ba75-3b2272ac6232 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:41.010554599Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:41.012244385Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/9a52b14b-c0cb-4516-858e-e38df84cbf1e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:41.012264269Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:51:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:42.995635 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:51:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:42.995944244Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:42.995981907Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:51:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:43.012452654Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/566b3290-3dc5-4659-b1c0-91dd18611609 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:51:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:43.012476845Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:51:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:50.996510 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" Jan 23 16:51:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:50.997011 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.180349723Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.180400290Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.180823410Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74): error removing pod 
openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.180864594Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.184884723Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.184915901Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-df60da2a\x2d5f41\x2d47e2\x2db93f\x2d79c811b60a68.mount: Succeeded.
Jan 23 16:51:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-373bedf6\x2dec97\x2d4a03\x2dac98\x2d92bd5054cba1.mount: Succeeded.
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.185767782Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.185800040Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.186726918Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.186758328Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-08f14945\x2d4d22\x2d4cd3\x2d9f3a\x2d9d46d77dcdfe.mount: Succeeded.
Jan 23 16:51:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-32f046f2\x2dadb0\x2d4ae5\x2d8f9a\x2d26cd0a5d7aef.mount: Succeeded.
Jan 23 16:51:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-35bda630\x2da4b7\x2d40c5\x2da941\x2d6692db334c05.mount: Succeeded.
Jan 23 16:51:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-08f14945\x2d4d22\x2d4cd3\x2d9f3a\x2d9d46d77dcdfe.mount: Succeeded.
Jan 23 16:51:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-df60da2a\x2d5f41\x2d47e2\x2db93f\x2d79c811b60a68.mount: Succeeded.
Jan 23 16:51:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-373bedf6\x2dec97\x2d4a03\x2dac98\x2d92bd5054cba1.mount: Succeeded.
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.228381802Z" level=info msg="runSandbox: deleting pod ID 97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3 from idIndex" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.228412917Z" level=info msg="runSandbox: removing pod sandbox 97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.228463364Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.228478622Z" level=info msg="runSandbox: unmounting shmPath for sandbox 97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.229281170Z" level=info msg="runSandbox: deleting pod ID 6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef from idIndex" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.229310502Z" level=info msg="runSandbox: removing pod sandbox 6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.229324799Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.229337145Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:51:55
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.236315673Z" level=info msg="runSandbox: deleting pod ID 54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d from idIndex" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.236342803Z" level=info msg="runSandbox: removing pod sandbox 54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.236357429Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.236368394Z" level=info msg="runSandbox: unmounting shmPath for sandbox 54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.236318804Z" level=info msg="runSandbox: deleting pod ID d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74 from idIndex" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.236426110Z" level=info msg="runSandbox: removing pod sandbox d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.236438583Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.236450478Z" level=info msg="runSandbox: unmounting shmPath for sandbox d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.237308476Z" level=info msg="runSandbox: deleting pod ID 742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189 from idIndex" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.237514692Z" level=info msg="runSandbox: removing pod sandbox 742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.237527449Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.237539869Z" level=info msg="runSandbox: unmounting shmPath for sandbox 
742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.240490623Z" level=info msg="runSandbox: removing pod sandbox from storage: 97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.243397683Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.243417329Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=dfc914d1-d7fc-4334-9b6e-9612a6aaa6a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.243695 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.243744 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.243773 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.243826 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.248438479Z" level=info msg="runSandbox: removing pod sandbox from storage: 6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.251746239Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.251766237Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=826cb564-bbc2-41c0-ae53-1e077125eeb4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.251943 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.251976 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.251996 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.252040 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.258499117Z" level=info msg="runSandbox: removing pod sandbox from storage: d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.258515077Z" level=info msg="runSandbox: removing pod sandbox from storage: 742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.258528413Z" level=info msg="runSandbox: removing pod sandbox from storage: 54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.261711852Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.261729176Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=94f5fafd-05ce-4c76-bcb9-d100d01c8be5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.261921 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.261955 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.261976 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.262013 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.264724978Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.264742988Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=efa2382f-fd3a-45d3-9573-28bc3ef5e8bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.264974 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.265021 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.265047 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.265099 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.267696190Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.267713752Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=817bbca7-9c5b-4be0-bb35-11a4f35e489d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.267879 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.267912 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.267935 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:51:55.267969 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:55.303633 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:55.303840 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:55.303859 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.303919153Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.303950914Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:55.303928 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:51:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:51:55.304004 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.304072933Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.304102671Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.304155728Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.304181125Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.304267715Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.304299808Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.304279326Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.304353955Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.334005930Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/0f8d1072-6d36-45bf-b2bd-8ed7d21e711b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.334029305Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.335399964Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/063015cb-d4f5-449a-8678-1485372cca3d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.335419539Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.336181454Z" level=info msg="Got 
pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/0069437f-9677-4067-86d2-fd4f1593194b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.336212116Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.337105856Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/f2579338-bbbb-47af-9eb5-f0869810a23a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.337127233Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.338264299Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/6e12b2df-f1f2-4cec-b731-5abae1d46979 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:51:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:55.338284832Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-35bda630\x2da4b7\x2d40c5\x2da941\x2d6692db334c05.mount: Succeeded.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-35bda630\x2da4b7\x2d40c5\x2da941\x2d6692db334c05.mount: Succeeded.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-08f14945\x2d4d22\x2d4cd3\x2d9f3a\x2d9d46d77dcdfe.mount: Succeeded.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-32f046f2\x2dadb0\x2d4ae5\x2d8f9a\x2d26cd0a5d7aef.mount: Succeeded.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-32f046f2\x2dadb0\x2d4ae5\x2d8f9a\x2d26cd0a5d7aef.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-32f046f2\x2dadb0\x2d4ae5\x2d8f9a\x2d26cd0a5d7aef.mount has successfully entered the 'dead' state.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-54e362346a4217b378cb182854c0efa0ec1d3b18d39e12060a4bd512a0500b9d-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-df60da2a\x2d5f41\x2d47e2\x2db93f\x2d79c811b60a68.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-df60da2a\x2d5f41\x2d47e2\x2db93f\x2d79c811b60a68.mount has successfully entered the 'dead' state.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-6cf4b0a1875d52d80fccd59411c5a480347b65ce23b3c1c99e57c321480624ef-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-373bedf6\x2dec97\x2d4a03\x2dac98\x2d92bd5054cba1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-373bedf6\x2dec97\x2d4a03\x2dac98\x2d92bd5054cba1.mount has successfully entered the 'dead' state.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-742bf1b14931d33b1ff397a9c4d23b65fee2e138584b1a9b6a7c7ba66c55c189-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-97c400759de90edc7be7dfa9cead2a4715bf4ea2fa5c904dd50e2b06853835b3-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:51:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d37184ca51d4cb22575c632bcc0a4324f081b34eb248b7eb30d522cf0f2ccd74-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:51:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:51:58.144823201Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:52:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:03.996938 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:52:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:03.997602 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:52:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:06.023039037Z" level=info msg="NetworkStart: stopping network for sandbox c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:06.023227108Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/79716a93-7598-4337-a772-1cb5fc1a0638 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:06.023251280Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:06.023258305Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:06.023265321Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:11.020993072Z" level=info msg="NetworkStart: stopping network for sandbox ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:11.021134508Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/22917cae-9ac2-4a5c-b7e1-ad76884f1467 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:11.021156807Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:11.021163665Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:11.021169624Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:14.996416 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:52:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:14.996935 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:52:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:16.022395199Z" level=info msg="NetworkStart: stopping network for sandbox dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:16.022603101Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/15f50647-72d4-4a15-9c7b-90e9fba93e18 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:16.022626900Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:16.022633344Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:16.022639845Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:19.021905860Z" level=info msg="NetworkStart: stopping network for sandbox 5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:19.022052398Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b67d2957-1245-4f3f-90cb-6dc93faf2164 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:19.022073590Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:19.022080595Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:19.022086734Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:20.026065119Z" level=info msg="NetworkStart: stopping network for sandbox 91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:20.026193666Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/98f75df3-3ac5-42d9-99cb-f2f0fdcdc28c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:20.026220945Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:20.026228193Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:20.026234392Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.032377128Z" level=info msg="NetworkStart: stopping network for sandbox e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.032548800Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/55665bf4-4f2b-41c3-9d3a-d7e19760feb2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.032573874Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.032581547Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.032587315Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.033940958Z" level=info msg="NetworkStart: stopping network for sandbox 188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034090724Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/3d8a99dc-4404-4ebe-8a3f-27164ee2b521 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034116914Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034128091Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034135646Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034123700Z" level=info msg="NetworkStart: stopping network for sandbox 961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034353139Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/9de11bf5-4e32-4bb4-baba-ff78be78849c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034374775Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034382122Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034388999Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034450065Z" level=info msg="NetworkStart: stopping network for sandbox 14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034564225Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/2e2e2df6-e46b-4975-8fa6-f63f0f572abc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034590164Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034597300Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:22.034603200Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.023875371Z" level=info msg="NetworkStart: stopping network for sandbox c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024034148Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/93b5f618-81f4-423b-ba75-3b2272ac6232 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024061666Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024069249Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024076973Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024172512Z" level=info msg="NetworkStart: stopping network for sandbox a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024297349Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/9a52b14b-c0cb-4516-858e-e38df84cbf1e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024319360Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024326092Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:26.024332522Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:27.873716 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:27.873736 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:27.873742 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:27.873749 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:27.873754 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:27.873761 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:27.873767 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 16:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:27.996775 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:52:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:27.997461005Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=27a44075-6000-4cc3-a2d5-12b43dbf3fc6 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:52:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:27.997643111Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=27a44075-6000-4cc3-a2d5-12b43dbf3fc6 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:52:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:27.998124313Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=92726ea2-47a5-4130-9b93-2e9737c71e60 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:52:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:27.998261536Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=92726ea2-47a5-4130-9b93-2e9737c71e60 name=/runtime.v1.ImageService/ImageStatus
Jan 23 16:52:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:27.999291784Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=4d688d2e-6218-49e8-9473-891782a3b931 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:52:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:27.999377591Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope.
-- Subject: Unit crio-conmon-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.025441634Z" level=info msg="NetworkStart: stopping network for sandbox 99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.025588079Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/566b3290-3dc5-4659-b1c0-91dd18611609 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.025612660Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.025619265Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.025625939Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.
-- Subject: Unit crio-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope has finished starting up.
--
-- The start-up result is done.
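[Annotation] The repeating "back-off 5m0s restarting failed container" errors above come from kubelet's per-container restart back-off, which roughly doubles the wait after each failed restart until it hits a cap. A sketch of that arithmetic, assuming the upstream kubelet defaults of a 10s base and a 5m cap; the function is an illustration of the schedule, not kubelet code.

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the approximate wait before restart attempt n,
// doubling from a 10s base and capping at 5 minutes.
func backoff(restarts int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("restart %d -> wait %v\n", n, backoff(n))
	}
	// By the fifth or sixth failure the delay is pinned at 5m0s, which is
	// why ovnkube-node-897lw only gets a fresh container every five minutes.
}
```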
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.111079448Z" level=info msg="Created container 5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=4d688d2e-6218-49e8-9473-891782a3b931 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.111566963Z" level=info msg="Starting container: 5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" id=ee472be1-639f-417e-8c26-28ad461ab195 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.132824326Z" level=info msg="Started container" PID=81345 containerID=5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=ee472be1-639f-417e-8c26-28ad461ab195 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.137100803Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.142226597Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.147834347Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.147855226Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.147868877Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.156747985Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.156775359Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.156787098Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.165624270Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.165642716Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.165653796Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.174775587Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.174793420Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.174803617Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.183080722Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:28.183099078Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:28.367995 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/185.log"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:28.369335 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93}
Jan 23 16:52:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:28.369545 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:52:28 hub-master-0.workload.bos2.lab conmon[81311]: conmon 5d2a37d32defb2ee58b1 : container 81345 exited with status 1
Jan 23 16:52:28 hub-master-0.workload.bos2.lab systemd[1]: crio-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope has successfully entered the 'dead' state.
Jan 23 16:52:28 hub-master-0.workload.bos2.lab systemd[1]: crio-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope: Consumed 569ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope completed and consumed the indicated resources.
Jan 23 16:52:28 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope has successfully entered the 'dead' state.
Jan 23 16:52:28 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope: Consumed 56ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93.scope completed and consumed the indicated resources.
Jan 23 16:52:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:29.372837 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/186.log"
Jan 23 16:52:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:29.373369 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/185.log"
Jan 23 16:52:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:29.374489 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" exitCode=1
Jan 23 16:52:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:29.374511 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93}
Jan 23 16:52:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:29.374532 8631 scope.go:115] "RemoveContainer" containerID="20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9"
Jan 23 16:52:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:29.375319702Z" level=info msg="Removing container: 20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9" id=2b6916d5-a9dd-42d5-a986-2f7ee3d488f1 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:52:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:29.375471 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:52:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:29.375972 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:52:29 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-f1c0ea50958b322633221e351446c4eb18786e45333d86b1c20768a182962e3b-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-f1c0ea50958b322633221e351446c4eb18786e45333d86b1c20768a182962e3b-merged.mount has successfully entered the 'dead' state.
Jan 23 16:52:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:29.410554818Z" level=info msg="Removed container 20f6bd9bd07e4073a9945c29314e206dab150ecefd1280bbe7997fc96d6f7cd9: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=2b6916d5-a9dd-42d5-a986-2f7ee3d488f1 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:52:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:30.377366 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/186.log"
Jan 23 16:52:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:30.379351 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:52:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:30.379859 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1243] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1248] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1249] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1493] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1494] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1505] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1508] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1509] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1510] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1514] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492758.1518] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:52:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492759.6844] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349242012Z" level=info msg="NetworkStart: stopping network for sandbox ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349607353Z" level=info msg="NetworkStart: stopping network for sandbox f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349634783Z" level=info msg="NetworkStart: stopping network for sandbox bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349618956Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/0f8d1072-6d36-45bf-b2bd-8ed7d21e711b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349754665Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349764934Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/0069437f-9677-4067-86d2-fd4f1593194b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349797416Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349805057Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349813226Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349765741Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349837983Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349759580Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/063015cb-d4f5-449a-8678-1485372cca3d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349968576Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349979445Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349987438Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.352693051Z" level=info msg="NetworkStart: stopping network for sandbox 108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.349968532Z" level=info msg="NetworkStart: stopping network for sandbox 4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.352894613Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/f2579338-bbbb-47af-9eb5-f0869810a23a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.352926895Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.352941442Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.352950414Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.352989440Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/6e12b2df-f1f2-4cec-b731-5abae1d46979 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.353105498Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.353125724Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:52:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:40.353140386Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:52:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:44.996658 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:52:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:44.997165 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.034539830Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.034580049Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:52:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-79716a93\x2d7598\x2d4337\x2da772\x2d1cb5fc1a0638.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-79716a93\x2d7598\x2d4337\x2da772\x2d1cb5fc1a0638.mount has successfully entered the 'dead' state.
Jan 23 16:52:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-79716a93\x2d7598\x2d4337\x2da772\x2d1cb5fc1a0638.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-79716a93\x2d7598\x2d4337\x2da772\x2d1cb5fc1a0638.mount has successfully entered the 'dead' state.
Jan 23 16:52:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-79716a93\x2d7598\x2d4337\x2da772\x2d1cb5fc1a0638.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-79716a93\x2d7598\x2d4337\x2da772\x2d1cb5fc1a0638.mount has successfully entered the 'dead' state.
Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.080446869Z" level=info msg="runSandbox: deleting pod ID c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e from idIndex" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.080474050Z" level=info msg="runSandbox: removing pod sandbox c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.080488283Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.080500489Z" level=info msg="runSandbox: unmounting shmPath for sandbox c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.092442460Z" level=info msg="runSandbox: removing pod sandbox from storage: c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.096077241Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:51.096097933Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=b7d41cf5-3ab2-43c4-8134-e8a6db31764b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:51.096253 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:52:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:51.096301 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:52:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:51.096324 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:52:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:51.096377 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(c63e272eec2688aff7bb159eef5fad7cb74e1baff6afb25fac21938d24523f2e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.031472842Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.031513042Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-22917cae\x2d9ac2\x2d4a5c\x2db7e1\x2dad76884f1467.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-22917cae\x2d9ac2\x2d4a5c\x2db7e1\x2dad76884f1467.mount has successfully entered the 'dead' state. Jan 23 16:52:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-22917cae\x2d9ac2\x2d4a5c\x2db7e1\x2dad76884f1467.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-22917cae\x2d9ac2\x2d4a5c\x2db7e1\x2dad76884f1467.mount has successfully entered the 'dead' state. Jan 23 16:52:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-22917cae\x2d9ac2\x2d4a5c\x2db7e1\x2dad76884f1467.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-22917cae\x2d9ac2\x2d4a5c\x2db7e1\x2dad76884f1467.mount has successfully entered the 'dead' state. 
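
Note: every failed ADD in this stream carries the same Multus complaint — the readiness indicator file /var/run/multus/cni/net.d/10-ovn-kubernetes.conf (written once the default OVN-Kubernetes network is up) never appears, so Multus's poll loop expires. A minimal Go sketch of that kind of readiness gate, assuming a wait.PollImmediate over os.Stat; names and durations here are illustrative, not Multus source:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until path exists, mirroring the gate
    // the journal describes; the interval and timeout are assumptions.
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        return wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err != nil {
                return false, nil // file not there yet; keep polling
            }
            return true, nil
        })
    }

    func main() {
        const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
        if err := waitForReadinessIndicator(indicator, 60*time.Second); err != nil {
            // wait.ErrWaitTimeout's text is exactly the journal's
            // "timed out waiting for the condition".
            fmt.Printf("still waiting for readinessindicatorfile @ %s: %v\n", indicator, err)
        }
    }
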
Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.075356086Z" level=info msg="runSandbox: deleting pod ID ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807 from idIndex" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.075383814Z" level=info msg="runSandbox: removing pod sandbox ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.075406278Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.075418541Z" level=info msg="runSandbox: unmounting shmPath for sandbox ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.092450924Z" level=info msg="runSandbox: removing pod sandbox from storage: ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.096078439Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:56.096097354Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=4af62f78-ad82-4a43-a64e-77a65033121a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:52:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:56.096289 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:52:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:56.096448 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:52:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:56.096472 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:52:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:56.096516 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ec5c000743b04ec1d72faa783180c630ba55c0081b9c294ff6e849a910b72807): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:52:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:52:58.142539355Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:52:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:52:59.996414 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:52:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:52:59.996939 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.033095126Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.033133768Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-15f50647\x2d72d4\x2d4a15\x2d9c7b\x2d90e9fba93e18.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-15f50647\x2d72d4\x2d4a15\x2d9c7b\x2d90e9fba93e18.mount has successfully entered the 'dead' state. Jan 23 16:53:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-15f50647\x2d72d4\x2d4a15\x2d9c7b\x2d90e9fba93e18.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-15f50647\x2d72d4\x2d4a15\x2d9c7b\x2d90e9fba93e18.mount has successfully entered the 'dead' state. Jan 23 16:53:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-15f50647\x2d72d4\x2d4a15\x2d9c7b\x2d90e9fba93e18.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-15f50647\x2d72d4\x2d4a15\x2d9c7b\x2d90e9fba93e18.mount has successfully entered the 'dead' state. 
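
Note: the ovnkube-node entry above shows the kubelet's container restart back-off already saturated at its cap ("back-off 5m0s restarting failed container"). The kubelet doubles the delay after each failed restart from an initial 10s up to a 5m ceiling; a toy Go loop illustrating that schedule — the constants match kubelet defaults, but the loop itself is a sketch, not kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff := 10 * time.Second        // kubelet's initial restart delay
        const maxBackoff = 5 * time.Minute // the "back-off 5m0s" cap in the log
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("restart attempt %d: waiting %v\n", attempt, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff // saturated: every further retry waits 5m0s
            }
        }
    }
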
Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.083318152Z" level=info msg="runSandbox: deleting pod ID dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b from idIndex" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.083344822Z" level=info msg="runSandbox: removing pod sandbox dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.083361860Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.083376014Z" level=info msg="runSandbox: unmounting shmPath for sandbox dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.099428001Z" level=info msg="runSandbox: removing pod sandbox from storage: dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.102729177Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:01.102750391Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=e576cf60-9e56-4ec8-af2d-293465625287 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:01.102907 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:53:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:01.102962 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:53:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:01.102994 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:53:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:01.103051 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(dec0bf9ddf6f62ced8ccd9c06073b5febb244c3807e843ff155a5ac59c88fb9b): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:53:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:03.995980 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:53:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:03.996361744Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:03.996572821Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.009233119Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/54553189-a2cf-4253-a8f4-b8790747dcbb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.009259695Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.032520637Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.032554288Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b67d2957\x2d1245\x2d4f3f\x2d90cb\x2d6dc93faf2164.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b67d2957\x2d1245\x2d4f3f\x2d90cb\x2d6dc93faf2164.mount has successfully entered the 'dead' state. 
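
Note: the "Got pod network &{Name:... NetNS:...}" line above is Go's default %+v rendering of a struct pointer that CRI-O hands to its CNI layer. A trimmed reproduction of that formatting, using a hypothetical PodNetwork type (the real type lives in CRI-O's ocicni dependency and carries more fields, such as Networks and RuntimeConfig):

    package main

    import "fmt"

    // PodNetwork is a trimmed, hypothetical stand-in for the struct CRI-O logs.
    type PodNetwork struct {
        Name      string
        Namespace string
        ID        string
        UID       string
        NetNS     string
    }

    func main() {
        pn := &PodNetwork{
            Name:      "openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab",
            Namespace: "openshift-kube-scheduler",
            ID:        "ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75",
            UID:       "7cca1a4c-e8cc-4938-9e14-a4d8d979ad14",
            NetNS:     "/var/run/netns/54553189-a2cf-4253-a8f4-b8790747dcbb",
        }
        // %+v on a pointer yields the journal's "&{Name:... Namespace:...}" shape.
        fmt.Printf("Got pod network %+v\n", pn)
    }
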
Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.072303643Z" level=info msg="runSandbox: deleting pod ID 5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9 from idIndex" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.072327182Z" level=info msg="runSandbox: removing pod sandbox 5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.072338967Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.072350898Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.084445015Z" level=info msg="runSandbox: removing pod sandbox from storage: 5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.087326232Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:04.087343981Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bf92ee0c-53a2-4fef-93a7-610a18249c48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:04.087534 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:53:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:04.087577 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:53:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:04.087600 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:53:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:04.087646 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:53:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b67d2957\x2d1245\x2d4f3f\x2d90cb\x2d6dc93faf2164.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b67d2957\x2d1245\x2d4f3f\x2d90cb\x2d6dc93faf2164.mount has successfully entered the 'dead' state. Jan 23 16:53:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b67d2957\x2d1245\x2d4f3f\x2d90cb\x2d6dc93faf2164.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b67d2957\x2d1245\x2d4f3f\x2d90cb\x2d6dc93faf2164.mount has successfully entered the 'dead' state. Jan 23 16:53:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5415e1aead3b1e6c36c96121e2ab7b7e1fa335a47ee7e2369ba46438fb14a8b9-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.036906327Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.036940895Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-98f75df3\x2d3ac5\x2d42d9\x2d99cb\x2df2f0fdcdc28c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-98f75df3\x2d3ac5\x2d42d9\x2d99cb\x2df2f0fdcdc28c.mount has successfully entered the 'dead' state. Jan 23 16:53:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-98f75df3\x2d3ac5\x2d42d9\x2d99cb\x2df2f0fdcdc28c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-98f75df3\x2d3ac5\x2d42d9\x2d99cb\x2df2f0fdcdc28c.mount has successfully entered the 'dead' state. Jan 23 16:53:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-98f75df3\x2d3ac5\x2d42d9\x2d99cb\x2df2f0fdcdc28c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-98f75df3\x2d3ac5\x2d42d9\x2d99cb\x2df2f0fdcdc28c.mount has successfully entered the 'dead' state. 
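
Note: the mount-unit names in these entries (run-netns-98f75df3\x2d3ac5\x2d...) are not garbled; they are systemd's standard path escaping, in which "/" separators become "-" and a literal "-" inside a path component is therefore encoded as \x2d. A rough Go illustration of just those two rules (the full systemd-escape algorithm also encodes other bytes):

    package main

    import (
        "fmt"
        "strings"
    )

    // systemdEscapePath applies the two escaping rules visible in the unit
    // names above; it is a sketch, not the complete systemd-escape algorithm.
    func systemdEscapePath(p string) string {
        var b strings.Builder
        for _, c := range strings.Trim(p, "/") {
            switch c {
            case '/':
                b.WriteByte('-')
            case '-':
                b.WriteString(`\x2d`)
            default:
                b.WriteRune(c)
            }
        }
        return b.String() + ".mount"
    }

    func main() {
        fmt.Println(systemdEscapePath("/run/netns/98f75df3-3ac5-42d9-99cb-f2f0fdcdc28c"))
        // -> run-netns-98f75df3\x2d3ac5\x2d42d9\x2d99cb\x2df2f0fdcdc28c.mount
    }
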
Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.072309563Z" level=info msg="runSandbox: deleting pod ID 91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14 from idIndex" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.072335002Z" level=info msg="runSandbox: removing pod sandbox 91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.072347844Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.072360454Z" level=info msg="runSandbox: unmounting shmPath for sandbox 91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.088439420Z" level=info msg="runSandbox: removing pod sandbox from storage: 91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.091831404Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:05.091849579Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=5456df52-adde-4e9c-9c6b-a88bad5082ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:05.092035 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:53:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:05.092076 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:53:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:05.092098 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:53:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:05.092145 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(91fb095cb9934dd31e49b63a9bc62a7aa24fb8b2ebf04a1a1b04927d4b2daa14): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:53:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:06.996051 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:06.996407403Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:06.996472949Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.008597462Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/ee925437-8a43-44c7-89e4-7331148c47c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.008620925Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.043581092Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.043616678Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.044275053Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.044322385Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 
16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.044381006Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.044409410Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.044718511Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.044749320Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2e2e2df6\x2de46b\x2d4975\x2d8fa6\x2df63f0f572abc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2e2e2df6\x2de46b\x2d4975\x2d8fa6\x2df63f0f572abc.mount has successfully entered the 'dead' state. Jan 23 16:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9de11bf5\x2d4e32\x2d4bb4\x2dbaba\x2dff78be78849c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9de11bf5\x2d4e32\x2d4bb4\x2dbaba\x2dff78be78849c.mount has successfully entered the 'dead' state. Jan 23 16:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3d8a99dc\x2d4404\x2d4ebe\x2d8a3f\x2d27164ee2b521.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3d8a99dc\x2d4404\x2d4ebe\x2d8a3f\x2d27164ee2b521.mount has successfully entered the 'dead' state. 
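
Note: each failed sandbox is then dismantled in the same fixed order, visible in the runSandbox messages that follow: idIndex removal, sandbox removal, container-ID removal, shm unmount, storage removal, then name release. A schematic of that sequence reconstructed from the log text itself (the step strings are quoted from the journal; the function is not CRI-O source):

    package main

    import "fmt"

    // cleanupSandbox replays the teardown order the runSandbox messages trace
    // after a failed sandbox start.
    func cleanupSandbox(id string) {
        steps := []string{
            "deleting pod ID " + id + " from idIndex",
            "removing pod sandbox " + id,
            "deleting container ID from idIndex for sandbox " + id,
            "unmounting shmPath for sandbox " + id,
            "removing pod sandbox from storage: " + id,
            "releasing container name",
            "releasing pod sandbox name",
        }
        for _, step := range steps {
            fmt.Println("runSandbox: " + step)
        }
    }

    func main() {
        cleanupSandbox("961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668")
    }
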
Jan 23 16:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-55665bf4\x2d4f2b\x2d41c3\x2d9d3a\x2dd7e19760feb2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-55665bf4\x2d4f2b\x2d41c3\x2d9d3a\x2dd7e19760feb2.mount has successfully entered the 'dead' state. Jan 23 16:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3d8a99dc\x2d4404\x2d4ebe\x2d8a3f\x2d27164ee2b521.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3d8a99dc\x2d4404\x2d4ebe\x2d8a3f\x2d27164ee2b521.mount has successfully entered the 'dead' state. Jan 23 16:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-55665bf4\x2d4f2b\x2d41c3\x2d9d3a\x2dd7e19760feb2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-55665bf4\x2d4f2b\x2d41c3\x2d9d3a\x2dd7e19760feb2.mount has successfully entered the 'dead' state. Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082362879Z" level=info msg="runSandbox: deleting pod ID 961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668 from idIndex" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082394050Z" level=info msg="runSandbox: removing pod sandbox 961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082409335Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082364777Z" level=info msg="runSandbox: deleting pod ID e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f from idIndex" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082469139Z" level=info msg="runSandbox: removing pod sandbox e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082479431Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082367776Z" level=info msg="runSandbox: deleting pod ID 188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990 from idIndex" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082497658Z" level=info msg="runSandbox: removing pod sandbox 188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082540364Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082554908Z" level=info msg="runSandbox: unmounting shmPath for sandbox 188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082504558Z" level=info msg="runSandbox: unmounting shmPath for sandbox 961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.082511605Z" level=info msg="runSandbox: unmounting shmPath for sandbox e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.090432620Z" level=info msg="runSandbox: deleting pod ID 14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c from idIndex" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.090457969Z" level=info msg="runSandbox: removing pod sandbox 14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.090471243Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.090484203Z" level=info msg="runSandbox: unmounting shmPath for sandbox 14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.098488663Z" level=info msg="runSandbox: removing pod sandbox from storage: e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.098505768Z" level=info msg="runSandbox: removing pod sandbox from storage: 14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.098497543Z" level=info msg="runSandbox: removing pod sandbox from storage: 188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.099445079Z" level=info msg="runSandbox: removing pod sandbox from 
storage: 961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.101546398Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.101569268Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=15863b70-204a-47b3-a201-7949e4c40f8d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.101821 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.101866 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.101897 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.101955 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.104867733Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.104888337Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=23338142-8172-464e-a8e1-9ee3276fb288 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.105131 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.105165 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.105185 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.105254 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.107936681Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.107955728Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=af53ad49-4bf5-4327-bdb1-93acefd4875f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.108179 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.108234 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.108258 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.108307 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.111012421Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:07.111031367Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=a2577d3b-4728-438f-9dde-e28fd3b65682 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.111246 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.111282 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.111315 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:07.111361 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2e2e2df6\x2de46b\x2d4975\x2d8fa6\x2df63f0f572abc.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2e2e2df6\x2de46b\x2d4975\x2d8fa6\x2df63f0f572abc.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9de11bf5\x2d4e32\x2d4bb4\x2dbaba\x2dff78be78849c.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9de11bf5\x2d4e32\x2d4bb4\x2dbaba\x2dff78be78849c.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3d8a99dc\x2d4404\x2d4ebe\x2d8a3f\x2d27164ee2b521.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-55665bf4\x2d4f2b\x2d41c3\x2d9d3a\x2dd7e19760feb2.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-188d06218b251e5249013b92f001e70de144ba02139e593a3026c1f49b462990-userdata-shm.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-14a55629f2861da71b1c1f62e42f55a4b3de5066e49777c6a4abe5caf86c1b4c-userdata-shm.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-961f7bf3c01af1732e9ebfd082d39bbf08abebae9e9fbd4abdbcbedf3234b668-userdata-shm.mount: Succeeded.
Jan 23 16:53:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e75c0ddde896e70a791d3842cb8785b65ece6a8bf15b1806f80d2e6f12f8999f-userdata-shm.mount: Succeeded.
Jan 23 16:53:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:10.997142 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:53:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:10.997787 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.035141470Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.035187506Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.036502808Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.036539782Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9a52b14b\x2dc0cb\x2d4516\x2d858e\x2de38df84cbf1e.mount: Succeeded.
Jan 23 16:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-93b5f618\x2d81f4\x2d423b\x2dba75\x2d3b2272ac6232.mount: Succeeded.
Jan 23 16:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9a52b14b\x2dc0cb\x2d4516\x2d858e\x2de38df84cbf1e.mount: Succeeded.
Jan 23 16:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-93b5f618\x2d81f4\x2d423b\x2dba75\x2d3b2272ac6232.mount: Succeeded.
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.066327755Z" level=info msg="runSandbox: deleting pod ID a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e from idIndex" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.066353326Z" level=info msg="runSandbox: removing pod sandbox a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.066367830Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.066387767Z" level=info msg="runSandbox: unmounting shmPath for sandbox a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.070291433Z" level=info msg="runSandbox: deleting pod ID c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8 from idIndex" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.070320706Z" level=info msg="runSandbox: removing pod sandbox c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.070336006Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.070351568Z" level=info msg="runSandbox: unmounting shmPath for sandbox c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.083403924Z" level=info msg="runSandbox: removing pod sandbox from storage: a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.086962740Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.086983003Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=1d8d682c-dc6e-4347-a967-cf79cd5d31c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:11.087187 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:11.087233 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:11.087255 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:11.087304 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.087453783Z" level=info msg="runSandbox: removing pod sandbox from storage: c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.090822337Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:11.090842652Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=26f6e5fb-713f-4818-8b84-b3ec963f6f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:11.090982 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:11.091016 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:11.091036 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:11.091081 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 16:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9a52b14b\x2dc0cb\x2d4516\x2d858e\x2de38df84cbf1e.mount: Succeeded.
Jan 23 16:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-93b5f618\x2d81f4\x2d423b\x2dba75\x2d3b2272ac6232.mount: Succeeded.
Jan 23 16:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a0aab12e9b819286f233ed0616c7c665e5ba3a6dddf99d07d8e6709638ec8c8e-userdata-shm.mount: Succeeded.
Jan 23 16:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c59712cc89be368f9ea8eec969baefd3f40e00aa69a315acef01a259ad978ca8-userdata-shm.mount: Succeeded.
Jan 23 16:53:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:12.995670 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:53:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:12.996029437Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:12.996082849Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.011909304Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/9ac0b01a-e80e-4adf-80a8-245e86630e53 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.011936843Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.036009351Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.036047961Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-566b3290\x2d3dc5\x2d4659\x2db1c0\x2d91dd18611609.mount: Succeeded.
Jan 23 16:53:13 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-566b3290\x2d3dc5\x2d4659\x2db1c0\x2d91dd18611609.mount: Succeeded.
Jan 23 16:53:13 hub-master-0.workload.bos2.lab systemd[1]: run-netns-566b3290\x2d3dc5\x2d4659\x2db1c0\x2d91dd18611609.mount: Succeeded.
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.071323788Z" level=info msg="runSandbox: deleting pod ID 99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3 from idIndex" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.071347540Z" level=info msg="runSandbox: removing pod sandbox 99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.071361392Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.071373183Z" level=info msg="runSandbox: unmounting shmPath for sandbox 99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.087445676Z" level=info msg="runSandbox: removing pod sandbox from storage: 99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.090299777Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:13.090318971Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=d6ea29cc-2682-4e6d-8d7d-c7ba233e244d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:13.090543 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:53:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:13.090586 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:53:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:13.090609 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:53:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:13.090670 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 16:53:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-99924d315c8c1ed459bc033012d554d9202c56ed4d421a21e37d0d0ef134f3f3-userdata-shm.mount: Succeeded.
Jan 23 16:53:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:15.995636 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:53:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:15.996076928Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:15.996131441Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:16.008153955Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/5e45a436-896b-40eb-9635-031a83465a5d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:16.008179431Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:16.995811 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:53:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:16.996155658Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:16.996193921Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:17.007248925Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/77110927-3518-4e68-88c9-835e9b851213 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:17.007275547Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:18.995901 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:18.996273175Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:18.996324574Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:19.008089167Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/bdf472a4-e8f0-4de8-bf31-4728e92c370d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:19.008109557Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:20.995392 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:53:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:20.995688575Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:20.995726872Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:21.006397066Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/e50e620e-49df-457f-aecc-c9aa7547125b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:21.006417939Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:21.996543 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:21.996671 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:21.996893725Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:21.996927087Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:21.996986118Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:21.997011828Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:22.012028368Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/958052f3-7d95-42ec-9e87-e70b1e0118ae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:22.012258737Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:22.012482095Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/3ee8fc45-7d19-4065-a227-57281ec7f29b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:22.012503441Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:22.995949 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:53:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:22.996153 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:22.996379700Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:22.996418459Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:22.996457837Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:22.996429066Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:23.015064540Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4b29cfad-99ac-46b1-bce3-de21d072b246 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:23.015092104Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:23.015899029Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/c8a62840-822a-4e51-ae64-2b8116d0d446 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:23.015922427Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:24.995462 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:53:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:24.995858550Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:24.995916856Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.008077469Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/95e2a56e-c61f-4c44-b6f6-1290782f492a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.008099416Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.360960880Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.360993045Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.363886208Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.363922493Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.363953635Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.363986732Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.364292608Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.364327283Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.364762456Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition"
id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.364794603Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0069437f\x2d9677\x2d4067\x2d86d2\x2dfd4f1593194b.mount: Succeeded. Jan 23 16:53:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6e12b2df\x2df1f2\x2d4cec\x2db731\x2d5abae1d46979.mount: Succeeded. Jan 23 16:53:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f2579338\x2dbbbb\x2d47af\x2d9eb5\x2df0869810a23a.mount: Succeeded. Jan 23 16:53:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-063015cb\x2dd4f5\x2d449a\x2d8678\x2d1485372cca3d.mount: Succeeded. Jan 23 16:53:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0f8d1072\x2d6d36\x2d45bf\x2db2bd\x2d8ed7d21e711b.mount: Succeeded. 
Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.404327109Z" level=info msg="runSandbox: deleting pod ID 4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87 from idIndex" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.404359149Z" level=info msg="runSandbox: removing pod sandbox 4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.404377969Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.404330087Z" level=info msg="runSandbox: deleting pod ID f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e from idIndex" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.404416570Z" level=info msg="runSandbox: removing pod sandbox f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.404429701Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.404442915Z" level=info msg="runSandbox: unmounting shmPath for sandbox f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.404419517Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.405307125Z" level=info msg="runSandbox: deleting pod ID ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd from idIndex" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.405332176Z" level=info msg="runSandbox: removing pod sandbox ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.405344969Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.405357069Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.408303716Z" level=info msg="runSandbox: deleting pod ID bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48 from idIndex" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.408328016Z" level=info msg="runSandbox: removing pod sandbox bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.408340568Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.408355160Z" level=info msg="runSandbox: unmounting shmPath for sandbox bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.412306556Z" level=info msg="runSandbox: deleting pod ID 108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b from idIndex" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.412330110Z" level=info msg="runSandbox: removing pod sandbox 108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.412341981Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.412352822Z" level=info msg="runSandbox: unmounting shmPath for sandbox 108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.416445002Z" level=info msg="runSandbox: removing pod sandbox from storage: f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.416476818Z" level=info msg="runSandbox: removing pod sandbox from storage: 4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.416504162Z" level=info msg="runSandbox: removing pod sandbox from storage: ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.419399587Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.419419847Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=b6cf6ee5-da16-41bb-bfb6-a656c03f32df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.419701 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.419758 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.419783 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.419837 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.420457636Z" level=info msg="runSandbox: removing pod sandbox from storage: bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.423059738Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.423079331Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2998aa28-2dd2-47eb-8b5f-9ce12c0bf44c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.423309 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.423352 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.423373 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.423414 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.426363959Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.426385042Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=8c9af3ab-72fd-45fb-9a3b-ab234bdd0e82 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.426618 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.426655 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.426679 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.426726 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.428440734Z" level=info msg="runSandbox: removing pod sandbox from storage: 108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.429571196Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.429593589Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d5dac644-7071-43b3-b426-486d65b4e03c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.429834 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.429871 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.429893 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.429929 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.432701660Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.432722404Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=ac719fbf-e3e7-4e1d-a3bc-139fabe60fa3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.432899 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.432937 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.432962 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.433015 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:25.482941 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:25.483046 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:25.483067 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:25.483244 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:25.483386 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483348793Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483383006Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483356372Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483464785Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483496629Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483527562Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483429965Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483657089Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483736912Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.483769188Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.509706288Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/34c1e02a-0d3e-4270-9a86-9e135c0f7338 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.509730420Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.510467874Z" level=info msg="Got pod 
network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/5e0ec4cc-3d5d-4f15-9043-6ba817e9e40a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.510487003Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.514333243Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/b500d350-dd17-4db1-bd48-cca9c6bf0c00 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.514355566Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.518097123Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/44ee848a-e163-4b13-9c14-e19907d8f1a2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.518121038Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.519221024Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/5e18e7e9-3212-4094-9919-9cd1fc5ac59f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:25.519244061Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:25.997187 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:25.997755 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: 
run-netns-6e12b2df\x2df1f2\x2d4cec\x2db731\x2d5abae1d46979.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6e12b2df\x2df1f2\x2d4cec\x2db731\x2d5abae1d46979.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f2579338\x2dbbbb\x2d47af\x2d9eb5\x2df0869810a23a.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f2579338\x2dbbbb\x2d47af\x2d9eb5\x2df0869810a23a.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0069437f\x2d9677\x2d4067\x2d86d2\x2dfd4f1593194b.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0069437f\x2d9677\x2d4067\x2d86d2\x2dfd4f1593194b.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-063015cb\x2dd4f5\x2d449a\x2d8678\x2d1485372cca3d.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-063015cb\x2dd4f5\x2d449a\x2d8678\x2d1485372cca3d.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0f8d1072\x2d6d36\x2d45bf\x2db2bd\x2d8ed7d21e711b.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0f8d1072\x2d6d36\x2d45bf\x2db2bd\x2d8ed7d21e711b.mount: Succeeded. 
Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bf40e4e198aa90c39457a93dd38b42d5e89fc6577449d4961bbda977e5512b48-userdata-shm.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-108940cd7346527e71312bc124200c490c9691d701a2bfd9d63552859c1fa68b-userdata-shm.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4a39d08eb377e4588ff14bd362914808dd971bafe2ef52a6f551efc008a77f87-userdata-shm.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f9b0b291e6f4f7d22afc17f9b12d790dc9d3968757ca7548efa67d5cda9f199e-userdata-shm.mount: Succeeded. Jan 23 16:53:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ce3e710e15516dbb56f45ed79bdd265b3267e27459be4d0c2b502b38cedeccfd-userdata-shm.mount: Succeeded. 
Jan 23 16:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:27.874798 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:27.874817 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:27.874825 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:27.874832 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:27.874840 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:27.874847 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:27.874854 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:28.143478105Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:53:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:36.996813 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:53:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:36.997458 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:53:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:53:48.996674 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:53:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:53:48.997222 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:53:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:49.023023160Z" level=info msg="NetworkStart: stopping network for sandbox ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:49.023331715Z" level=info msg="Got pod network 
&{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/54553189-a2cf-4253-a8f4-b8790747dcbb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:49.023359986Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:53:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:49.023367892Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:53:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:49.023374942Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:52.020685827Z" level=info msg="NetworkStart: stopping network for sandbox 0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:52.020878293Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/ee925437-8a43-44c7-89e4-7331148c47c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:52.020901898Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:53:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:52.020910219Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:53:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:52.020916216Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:58.025837051Z" level=info msg="NetworkStart: stopping network for sandbox a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:58.026024591Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/9ac0b01a-e80e-4adf-80a8-245e86630e53 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:58.026048745Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:58.026055812Z" level=warning msg="falling 
back to loading from existing plugins on disk" Jan 23 16:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:58.026063097Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:53:58.146423061Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:54:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:00.996077 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:54:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:00.996697 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:54:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:01.020536116Z" level=info msg="NetworkStart: stopping network for sandbox 20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:01.020693841Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/5e45a436-896b-40eb-9635-031a83465a5d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:01.020717092Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:01.020724822Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:01.020731150Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:02.021105912Z" level=info msg="NetworkStart: stopping network for sandbox ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:02.021268993Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/77110927-3518-4e68-88c9-835e9b851213 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:02.021296316Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:02 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:02.021303539Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:02.021309810Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:04.021282728Z" level=info msg="NetworkStart: stopping network for sandbox f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:04.021435177Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/bdf472a4-e8f0-4de8-bf31-4728e92c370d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:04.021461002Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:04.021468650Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:04.021475133Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:06.019372131Z" level=info msg="NetworkStart: stopping network for sandbox dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:06.019526367Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/e50e620e-49df-457f-aecc-c9aa7547125b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:06.019550969Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:06.019557831Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:06.019563974Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.024985439Z" level=info msg="NetworkStart: stopping network for sandbox 3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox 
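The recurring kubenswrapper pairs above ("RemoveContainer" at 16:53:36, 16:53:48, 16:54:00 followed by "CrashLoopBackOff: back-off 5m0s") show the kubelet's restart back-off for ovnkube-node already at its ceiling: each failed restart doubles the delay until it is clamped at five minutes. A short Go sketch of that schedule, assuming the usual kubelet defaults of a 10s initial delay and a 5m cap (illustrative only, not kubelet source):

    package main

    import (
    	"fmt"
    	"time"
    )

    // crashLoopDelays lists the restart delays applied to a container that
    // keeps failing: start at base, double each time, clamp at max. With a
    // 10s base and 5m cap the sequence ends at the "back-off 5m0s" quoted
    // in the log above.
    func crashLoopDelays(base, max time.Duration, restarts int) []time.Duration {
    	delays := make([]time.Duration, 0, restarts)
    	d := base
    	for i := 0; i < restarts; i++ {
    		delays = append(delays, d)
    		d *= 2
    		if d > max {
    			d = max
    		}
    	}
    	return delays
    }

    func main() {
    	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
    	fmt.Println(crashLoopDelays(10*time.Second, 5*time.Minute, 8))
    }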
Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025162812Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/958052f3-7d95-42ec-9e87-e70b1e0118ae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025190772Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025199728Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025213908Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025263635Z" level=info msg="NetworkStart: stopping network for sandbox 746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025428985Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/3ee8fc45-7d19-4065-a227-57281ec7f29b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025456886Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025464169Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:07.025471226Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028203792Z" level=info msg="NetworkStart: stopping network for sandbox 016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028391170Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/c8a62840-822a-4e51-ae64-2b8116d0d446 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028415756Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 
16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028423624Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028429862Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028444440Z" level=info msg="NetworkStart: stopping network for sandbox 3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028584504Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4b29cfad-99ac-46b1-bce3-de21d072b246 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028606945Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028612878Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:08.028619008Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492848.1285] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:54:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492848.1290] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:54:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492848.1291] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:54:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492848.1307] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:54:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674492848.1308] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.021473958Z" level=info msg="NetworkStart: stopping network for sandbox 71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.021684441Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/95e2a56e-c61f-4c44-b6f6-1290782f492a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:54:10.021707353Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.021714181Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.021720328Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.522551539Z" level=info msg="NetworkStart: stopping network for sandbox 00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.522679012Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/5e0ec4cc-3d5d-4f15-9043-6ba817e9e40a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.522699652Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.522707220Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.522712754Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.523506811Z" level=info msg="NetworkStart: stopping network for sandbox 487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.523608856Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/34c1e02a-0d3e-4270-9a86-9e135c0f7338 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.523628725Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.523634919Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.523640311Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.527810638Z" level=info msg="NetworkStart: stopping network for sandbox 
3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.527951367Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/b500d350-dd17-4db1-bd48-cca9c6bf0c00 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.527974912Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.527982406Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.527989622Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.529342067Z" level=info msg="NetworkStart: stopping network for sandbox aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.529445885Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/44ee848a-e163-4b13-9c14-e19907d8f1a2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.529470203Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.529477301Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.529482733Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.530929656Z" level=info msg="NetworkStart: stopping network for sandbox 732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.531039325Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/5e18e7e9-3212-4094-9919-9cd1fc5ac59f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:54:10.531061312Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.531069443Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:10.531076451Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:14.996463 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:54:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:14.996963 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:54:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:25.997107 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:54:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:25.997747 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:27.874953 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:27.874974 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:27.874981 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:27.874987 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:27.874993 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:27.875000 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:27.875006 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:54:28 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 16:54:28.143584186Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.034363930Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.034414043Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-54553189\x2da2cf\x2d4253\x2da8f4\x2db8790747dcbb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-54553189\x2da2cf\x2d4253\x2da8f4\x2db8790747dcbb.mount has successfully entered the 'dead' state. Jan 23 16:54:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-54553189\x2da2cf\x2d4253\x2da8f4\x2db8790747dcbb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-54553189\x2da2cf\x2d4253\x2da8f4\x2db8790747dcbb.mount has successfully entered the 'dead' state. Jan 23 16:54:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-54553189\x2da2cf\x2d4253\x2da8f4\x2db8790747dcbb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-54553189\x2da2cf\x2d4253\x2da8f4\x2db8790747dcbb.mount has successfully entered the 'dead' state. 
Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.068346911Z" level=info msg="runSandbox: deleting pod ID ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75 from idIndex" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.068386451Z" level=info msg="runSandbox: removing pod sandbox ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.068403922Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.068419949Z" level=info msg="runSandbox: unmounting shmPath for sandbox ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75-userdata-shm.mount: Succeeded.
Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.080501019Z" level=info msg="runSandbox: removing pod sandbox from storage: ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.083367768Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:34.083386607Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=6cfa2865-f67f-446d-aacc-cb7fe1a6b42c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:34.083585 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:34.083634 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:34.083658 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:34.083706 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ab2a47bba617b7dad723a4103fee51c4b33222b86841cb74423768a366831b75): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.031988847Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.032023700Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ee925437\x2d8a43\x2d44c7\x2d89e4\x2d7331148c47c6.mount: Succeeded.
Jan 23 16:54:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ee925437\x2d8a43\x2d44c7\x2d89e4\x2d7331148c47c6.mount: Succeeded.
Jan 23 16:54:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ee925437\x2d8a43\x2d44c7\x2d89e4\x2d7331148c47c6.mount: Succeeded.
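Both failure modes in this stretch of the log come from the same wait: Multus is configured with a readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which OVN-Kubernetes writes once it is up; every CNI add and delete first polls for that file, and with ovnkube-node crash-looping the file never appears, so each sandbox create or teardown times out. A minimal Go illustration of such a poll, using the wait.PollImmediate helper the error text names (a sketch, not Multus's actual implementation); apimachinery's timeout error carries exactly the "timed out waiting for the condition" string seen here:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessFile blocks until the CNI readiness indicator file
    // exists, polling once per second up to timeout. A wait like this is
    // what produces the "PollImmediate error waiting for
    // ReadinessIndicatorFile" messages above when it times out.
    func waitForReadinessFile(path string, timeout time.Duration) error {
    	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
    		if _, err := os.Stat(path); err != nil {
    			return false, nil // not there yet; keep polling
    		}
    		return true, nil
    	})
    }

    func main() {
    	err := waitForReadinessFile("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
    	fmt.Println(err) // "timed out waiting for the condition" if the file is absent
    }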
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.079310361Z" level=info msg="runSandbox: deleting pod ID 0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7 from idIndex" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.079334051Z" level=info msg="runSandbox: removing pod sandbox 0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.079346770Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.079360893Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7-userdata-shm.mount: Succeeded.
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.095455220Z" level=info msg="runSandbox: removing pod sandbox from storage: 0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.099084180Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:37.099102502Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c96228ef-8275-40b7-b37b-07a3f3a96598 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:37.099329 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 16:54:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:37.099497 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:54:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:37.099519 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:54:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:37.099571 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0e514b5c4668d907ac33fe89aa2775fc0e7a3fe08b71a593cedf98a24a934cc7): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 16:54:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:37.996994 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:54:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:37.997525 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.037376594Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.037421169Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9ac0b01a\x2de80e\x2d4adf\x2d80a8\x2d245e86630e53.mount: Succeeded.
Jan 23 16:54:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9ac0b01a\x2de80e\x2d4adf\x2d80a8\x2d245e86630e53.mount: Succeeded.
Jan 23 16:54:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9ac0b01a\x2de80e\x2d4adf\x2d80a8\x2d245e86630e53.mount: Succeeded.
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.076314019Z" level=info msg="runSandbox: deleting pod ID a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91 from idIndex" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.076345152Z" level=info msg="runSandbox: removing pod sandbox a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.076364392Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.076378317Z" level=info msg="runSandbox: unmounting shmPath for sandbox a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91-userdata-shm.mount: Succeeded.
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.093444207Z" level=info msg="runSandbox: removing pod sandbox from storage: a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.096877378Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:43.096895841Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=dda94a04-086d-4999-9f26-6f95f7fd0480 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:43.097108 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus:
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:43.097156 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:43.097183 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:54:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:43.097239 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(a29e228e4d1e1633c4c335f3bea8751def5de34087323c6130624419a882ee91): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.032701115Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.032747398Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:54:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5e45a436\x2d896b\x2d40eb\x2d9635\x2d031a83465a5d.mount: Succeeded.
Jan 23 16:54:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5e45a436\x2d896b\x2d40eb\x2d9635\x2d031a83465a5d.mount: Succeeded.
Jan 23 16:54:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5e45a436\x2d896b\x2d40eb\x2d9635\x2d031a83465a5d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5e45a436\x2d896b\x2d40eb\x2d9635\x2d031a83465a5d.mount has successfully entered the 'dead' state. Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.068306826Z" level=info msg="runSandbox: deleting pod ID 20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52 from idIndex" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.068334181Z" level=info msg="runSandbox: removing pod sandbox 20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.068348880Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.068360805Z" level=info msg="runSandbox: unmounting shmPath for sandbox 20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52-userdata-shm.mount has successfully entered the 'dead' state. 
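Every failing ADD and DEL above is blocked on the same condition: Multus polls for the default network's readiness indicator file and gives up with "timed out waiting for the condition". A minimal sketch of that gate, assuming apimachinery's wait.PollImmediate; the helper name, 250ms interval, and 1-minute timeout are illustrative assumptions, not values taken from the log:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until the default network's CNI config
    // appears on disk, mirroring the "still waiting for readinessindicatorfile"
    // messages above. Helper name and intervals are assumptions.
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        return wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err == nil {
                return true, nil // file exists: default network is ready
            }
            return false, nil // not yet; PollImmediate retries until timeout
        })
    }

    func main() {
        path := "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
        if err := waitForReadinessIndicator(path, time.Minute); err != nil {
            // Surfaces in the log as: "pollimmediate error: timed out waiting for the condition"
            fmt.Println("pollimmediate error:", err)
        }
    }

The indicator file is written by the default network's CNI plugin (OVN-Kubernetes here); with ovnkube-node stuck in CrashLoopBackOff later in this log, the file plausibly never appears, which is why every sandbox ADD and DEL times out identically.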
Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.080434146Z" level=info msg="runSandbox: removing pod sandbox from storage: 20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.083836935Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:46.083855327Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=6a1cc2d9-51d4-4f98-bf81-683ee734dc38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:46.084086 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:46.084130 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:54:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:46.084153 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:54:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:46.084198 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.032356489Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.032398735Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-77110927\x2d3518\x2d4e68\x2d88c9\x2d835e9b851213.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-77110927\x2d3518\x2d4e68\x2d88c9\x2d835e9b851213.mount has successfully entered the 'dead' state. Jan 23 16:54:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-77110927\x2d3518\x2d4e68\x2d88c9\x2d835e9b851213.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-77110927\x2d3518\x2d4e68\x2d88c9\x2d835e9b851213.mount has successfully entered the 'dead' state. Jan 23 16:54:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-77110927\x2d3518\x2d4e68\x2d88c9\x2d835e9b851213.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-77110927\x2d3518\x2d4e68\x2d88c9\x2d835e9b851213.mount has successfully entered the 'dead' state. Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.075308898Z" level=info msg="runSandbox: deleting pod ID ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7 from idIndex" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.075336131Z" level=info msg="runSandbox: removing pod sandbox ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.075354118Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.075370430Z" level=info msg="runSandbox: unmounting shmPath for sandbox ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7-userdata-shm.mount has successfully entered the 'dead' state. 
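Each failed sandbox then goes through the same teardown, visible as a fixed sequence of runSandbox messages; the systemd "...-userdata-shm.mount: Succeeded" entries interleave because the shm unmount is what drives that mount unit to the 'dead' state. A sketch that simply replays the observed order (step strings copied from the log; the function name is an invented stand-in, not CRI-O's implementation):

    package main

    import "fmt"

    // cleanupFailedSandbox replays, in order, the runSandbox messages CRI-O
    // logs above when a CNI ADD fails. This traces the observed sequence only.
    func cleanupFailedSandbox(id string) {
        steps := []string{
            "cleaning up namespaces after failing to run sandbox " + id,
            "deleting pod ID " + id + " from idIndex",
            "removing pod sandbox " + id,
            "deleting container ID from idIndex for sandbox " + id,
            "unmounting shmPath for sandbox " + id, // the shm .mount unit goes 'dead' here
            "removing pod sandbox from storage: " + id,
            "releasing container name",
            "releasing pod sandbox name",
        }
        for _, s := range steps {
            fmt.Println("runSandbox: " + s)
        }
    }

    func main() {
        cleanupFailedSandbox("20520ff7970e178effb6955309714e95f63d73356c8da79dbef8bf319c53fa52")
    }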
Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.091429824Z" level=info msg="runSandbox: removing pod sandbox from storage: ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.094847146Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.094865363Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=e5bcb448-5d6e-4ec7-bc05-4606fe74b05d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:47.095091 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:47.095161 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:54:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:47.095183 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:54:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:47.095242 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffc8c3f42451cf524e584cc05e1e2e38648d541920e9067fd14c74d65ff287f7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:54:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:47.996973 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.997363186Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:47.997417953Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:48.010819832Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/98aa1b61-c56a-43d4-ab31-9370f21a8490 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:48.010845229Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.031007423Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.031048022Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bdf472a4\x2de8f0\x2d4de8\x2dbf31\x2d4728e92c370d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bdf472a4\x2de8f0\x2d4de8\x2dbf31\x2d4728e92c370d.mount has successfully entered the 'dead' state. Jan 23 16:54:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bdf472a4\x2de8f0\x2d4de8\x2dbf31\x2d4728e92c370d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-bdf472a4\x2de8f0\x2d4de8\x2dbf31\x2d4728e92c370d.mount has successfully entered the 'dead' state. Jan 23 16:54:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bdf472a4\x2de8f0\x2d4de8\x2dbf31\x2d4728e92c370d.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-bdf472a4\x2de8f0\x2d4de8\x2dbf31\x2d4728e92c370d.mount has successfully entered the 'dead' state. Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.073284112Z" level=info msg="runSandbox: deleting pod ID f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad from idIndex" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.073310615Z" level=info msg="runSandbox: removing pod sandbox f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.073328858Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.073345108Z" level=info msg="runSandbox: unmounting shmPath for sandbox f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad-userdata-shm.mount has successfully entered the 'dead' state. 
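The "Got pod network &{Name:... NetNS:... RuntimeConfig:map[...]}" lines above are Go's %+v rendering of the runtime's per-pod network bookkeeping. A hedged reconstruction of a struct with that shape, populated from the openshift-kube-scheduler-guard dump above (field names mirror the log output; this is illustrative, not the exact ocicni type):

    package main

    import "fmt"

    // Shapes inferred from the %+v dump; names mirror the log, not the real ocicni API.
    type RuntimeConfig struct {
        IP           string
        MAC          string
        PortMappings []struct{ HostPort, ContainerPort int32; Protocol string }
        Bandwidth    *struct{ IngressRate, EgressRate uint64 }
        IpRanges     [][]string
    }

    type PodNetwork struct {
        Name          string // pod name
        Namespace     string
        ID            string // sandbox ID: the 64-hex value in the log
        UID           string
        NetNS         string // /var/run/netns/<uuid>
        Networks      []string // empty in the log: only the default network
        RuntimeConfig map[string]RuntimeConfig
        Aliases       map[string][]string
    }

    func main() {
        pn := PodNetwork{
            Name:          "openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab",
            Namespace:     "openshift-kube-scheduler",
            ID:            "d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90",
            UID:           "7cca1a4c-e8cc-4938-9e14-a4d8d979ad14",
            NetNS:         "/var/run/netns/98aa1b61-c56a-43d4-ab31-9370f21a8490",
            RuntimeConfig: map[string]RuntimeConfig{"multus-cni-network": {}},
        }
        fmt.Printf("Got pod network %+v\n", &pn) // %+v on a pointer prints &{...} as in the log
    }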
Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.093420539Z" level=info msg="runSandbox: removing pod sandbox from storage: f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.096203999Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:49.096228226Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=4d7a5b05-f1f2-4425-bee9-7bc0ba9d58ce name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:49.096430 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:49.096477 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:49.096511 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:49.096556 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f00555789c3be9dfa6b3b46d5b262bed1eb259563216f374bbdd6b2e8449a4ad): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:49.996253 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:49.996814 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.030728928Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.030769373Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e50e620e\x2d49df\x2d457f\x2daecc\x2dc9aa7547125b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e50e620e\x2d49df\x2d457f\x2daecc\x2dc9aa7547125b.mount has successfully entered the 'dead' state. Jan 23 16:54:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e50e620e\x2d49df\x2d457f\x2daecc\x2dc9aa7547125b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e50e620e\x2d49df\x2d457f\x2daecc\x2dc9aa7547125b.mount has successfully entered the 'dead' state. Jan 23 16:54:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e50e620e\x2d49df\x2d457f\x2daecc\x2dc9aa7547125b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e50e620e\x2d49df\x2d457f\x2daecc\x2dc9aa7547125b.mount has successfully entered the 'dead' state. 
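The "back-off 5m0s restarting failed container=ovnkube-node" entry above is kubelet's CrashLoopBackOff at its cap: the restart delay starts at 10s and doubles per restart, capped at 5 minutes. A small sketch of that policy (the doubling schedule is kubelet's documented behavior; the code itself is an illustrative stand-in, not kubelet's implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the CrashLoopBackOff wait after a given number
    // of restarts: 10s, 20s, 40s, ... capped at the 5m0s seen in the log.
    func crashLoopDelay(restarts int) time.Duration {
        delay := 10 * time.Second
        for i := 0; i < restarts; i++ {
            delay *= 2
            if delay >= 5*time.Minute {
                return 5 * time.Minute // cap observed above: "back-off 5m0s"
            }
        }
        return delay
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r))
        }
    }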
Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.066313969Z" level=info msg="runSandbox: deleting pod ID dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987 from idIndex" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.066341836Z" level=info msg="runSandbox: removing pod sandbox dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.066357858Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.066371118Z" level=info msg="runSandbox: unmounting shmPath for sandbox dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.079513234Z" level=info msg="runSandbox: removing pod sandbox from storage: dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.082837661Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.082857592Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=e44fc214-8ed3-4ede-a525-3fff091571d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:51.083064 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:51.083108 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:51.083131 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:51.083180 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd060f9b9816242619bde8a306e869b39de8c63bd27e9d175ea3f50c5010b987): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:51.996507 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.996783711Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:51.997025751Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.008737775Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/479e28db-b52c-4890-886b-e2423494d70d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.008760303Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.036037654Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.036071828Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.036086142Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.036125798Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3ee8fc45\x2d7d19\x2d4065\x2da227\x2d57281ec7f29b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3ee8fc45\x2d7d19\x2d4065\x2da227\x2d57281ec7f29b.mount has successfully entered the 'dead' state. Jan 23 16:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-958052f3\x2d7d95\x2d42ec\x2d9e87\x2de70b1e0118ae.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-958052f3\x2d7d95\x2d42ec\x2d9e87\x2de70b1e0118ae.mount has successfully entered the 'dead' state. Jan 23 16:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3ee8fc45\x2d7d19\x2d4065\x2da227\x2d57281ec7f29b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3ee8fc45\x2d7d19\x2d4065\x2da227\x2d57281ec7f29b.mount has successfully entered the 'dead' state. Jan 23 16:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-958052f3\x2d7d95\x2d42ec\x2d9e87\x2de70b1e0118ae.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-958052f3\x2d7d95\x2d42ec\x2d9e87\x2de70b1e0118ae.mount has successfully entered the 'dead' state. Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.072292822Z" level=info msg="runSandbox: deleting pod ID 3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9 from idIndex" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.072323373Z" level=info msg="runSandbox: removing pod sandbox 3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.072340868Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.072355242Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.073292107Z" level=info msg="runSandbox: deleting pod ID 746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746 from idIndex" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.073318124Z" level=info msg="runSandbox: removing pod sandbox 746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.073331021Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.073344389Z" level=info msg="runSandbox: unmounting shmPath for sandbox 746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.091479378Z" level=info msg="runSandbox: removing pod sandbox from storage: 3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.092402242Z" level=info msg="runSandbox: removing pod sandbox from storage: 746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.094373533Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.094393580Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=963879bd-f751-405a-b6b4-bb7c2d4ee50b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:52.094676 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:52.094725 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:52.094749 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:52.094801 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.097617716Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:52.097636107Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=0838b6b6-f19b-4270-8cb0-716f3f17d8cf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:52.097857 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:52.097896 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:52.097922 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:52.097965 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3ee8fc45\x2d7d19\x2d4065\x2da227\x2d57281ec7f29b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3ee8fc45\x2d7d19\x2d4065\x2da227\x2d57281ec7f29b.mount has successfully entered the 'dead' state. Jan 23 16:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-958052f3\x2d7d95\x2d42ec\x2d9e87\x2de70b1e0118ae.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-958052f3\x2d7d95\x2d42ec\x2d9e87\x2de70b1e0118ae.mount has successfully entered the 'dead' state. Jan 23 16:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-746b7549ca88bf1b4b2af08bcb29c0afcefa49bf35f63ec9f11cdad8e2134746-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3f90746cd7dd26747272e5e91df16ff90af7ce7e3bb01e7f092d085061208fc9-userdata-shm.mount has successfully entered the 'dead' state. 
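The mount-unit names throughout (e.g. "run-netns-3ee8fc45\x2d7d19\x2d...") use systemd's unit-name escaping, in which a "-" inside a path component becomes \x2d. A minimal decoder for just that escape (a full systemd unescape would also map the remaining "-" separators back to "/"; that part is omitted here):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeSystemd decodes \xXX escapes in a systemd unit name.
    func unescapeSystemd(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); {
            if strings.HasPrefix(s[i:], `\x`) && i+4 <= len(s) {
                if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v)) // e.g. \x2d -> '-'
                    i += 4
                    continue
                }
            }
            b.WriteByte(s[i])
            i++
        }
        return b.String()
    }

    func main() {
        fmt.Println(unescapeSystemd(`run-netns-3ee8fc45\x2d7d19\x2d4065\x2da227\x2d57281ec7f29b.mount`))
        // Output: run-netns-3ee8fc45-7d19-4065-a227-57281ec7f29b.mount
    }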
Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.039124808Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.039165860Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.039483574Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.039545329Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c8a62840\x2d822a\x2d4e51\x2dae64\x2d2b8116d0d446.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c8a62840\x2d822a\x2d4e51\x2dae64\x2d2b8116d0d446.mount has successfully entered the 'dead' state. Jan 23 16:54:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4b29cfad\x2d99ac\x2d46b1\x2dbce3\x2dde21d072b246.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4b29cfad\x2d99ac\x2d46b1\x2dbce3\x2dde21d072b246.mount has successfully entered the 'dead' state. Jan 23 16:54:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c8a62840\x2d822a\x2d4e51\x2dae64\x2d2b8116d0d446.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c8a62840\x2d822a\x2d4e51\x2dae64\x2d2b8116d0d446.mount has successfully entered the 'dead' state. 
Jan 23 16:54:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4b29cfad\x2d99ac\x2d46b1\x2dbce3\x2dde21d072b246.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4b29cfad\x2d99ac\x2d46b1\x2dbce3\x2dde21d072b246.mount has successfully entered the 'dead' state. Jan 23 16:54:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c8a62840\x2d822a\x2d4e51\x2dae64\x2d2b8116d0d446.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c8a62840\x2d822a\x2d4e51\x2dae64\x2d2b8116d0d446.mount has successfully entered the 'dead' state. Jan 23 16:54:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4b29cfad\x2d99ac\x2d46b1\x2dbce3\x2dde21d072b246.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4b29cfad\x2d99ac\x2d46b1\x2dbce3\x2dde21d072b246.mount has successfully entered the 'dead' state. Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.080328309Z" level=info msg="runSandbox: deleting pod ID 3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8 from idIndex" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.080359197Z" level=info msg="runSandbox: removing pod sandbox 3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.080375592Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.080333403Z" level=info msg="runSandbox: deleting pod ID 016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650 from idIndex" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.080409571Z" level=info msg="runSandbox: removing pod sandbox 016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.080419968Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.080433339Z" level=info msg="runSandbox: unmounting shmPath for sandbox 016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.080419676Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.092485226Z" level=info msg="runSandbox: removing pod sandbox from storage: 3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.092495035Z" level=info msg="runSandbox: removing pod sandbox from storage: 016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.095831026Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.095850213Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=4ea0b646-3ab7-480c-b617-f377aadb6b7a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:53.096134 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:53.096175 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:53.096197 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:53.096249 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.099054064Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:53.099072260Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=5e7ce24a-e908-4af6-a9e5-913116bbd6b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:53.099255 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:53.099290 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:53.099311 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:53.099348 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:54:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-016d85bee70689112e9fde790ac473148ef6f1d5f8681970a4d19745f9225650-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3f7f296eeed5ce54e810e43181bfdc1e13abbbc7cf915e7a3b16ac3d2569d3f8-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:54:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:54.996358 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:54:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:54.996683394Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:54.996724708Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.012433799Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/483813b1-47b4-4c5d-8e27-87aacb3f643e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.012459998Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.032249746Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.032285650Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-95e2a56e\x2dc61f\x2d4c44\x2db6f6\x2d1290782f492a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-95e2a56e\x2dc61f\x2d4c44\x2db6f6\x2d1290782f492a.mount has successfully entered the 'dead' state. Jan 23 16:54:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-95e2a56e\x2dc61f\x2d4c44\x2db6f6\x2d1290782f492a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-95e2a56e\x2dc61f\x2d4c44\x2db6f6\x2d1290782f492a.mount has successfully entered the 'dead' state. 
Jan 23 16:54:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-95e2a56e\x2dc61f\x2d4c44\x2db6f6\x2d1290782f492a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-95e2a56e\x2dc61f\x2d4c44\x2db6f6\x2d1290782f492a.mount has successfully entered the 'dead' state. Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.068311369Z" level=info msg="runSandbox: deleting pod ID 71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768 from idIndex" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.068336255Z" level=info msg="runSandbox: removing pod sandbox 71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.068351382Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.068363723Z" level=info msg="runSandbox: unmounting shmPath for sandbox 71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.080446986Z" level=info msg="runSandbox: removing pod sandbox from storage: 71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.083159811Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.083178168Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=a8f5cadb-eefd-44ba-9ec3-e1b5381fc0e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.083446 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.083485 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.083510 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.083555 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.532655175Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.532688005Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.533714029Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.533749803Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.539590003Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.539616686Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb" 
id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.539790831Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.539822383Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.541963875Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.541992001Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.576307601Z" level=info msg="runSandbox: deleting pod ID 487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f from idIndex" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.576337816Z" level=info msg="runSandbox: removing pod sandbox 487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.576307822Z" level=info msg="runSandbox: deleting pod ID 00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116 from idIndex" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.576375637Z" level=info msg="runSandbox: removing pod sandbox 
00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.576390483Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.576403329Z" level=info msg="runSandbox: unmounting shmPath for sandbox 00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.576352373Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.576539260Z" level=info msg="runSandbox: unmounting shmPath for sandbox 487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.584312738Z" level=info msg="runSandbox: deleting pod ID aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a from idIndex" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.584337156Z" level=info msg="runSandbox: removing pod sandbox aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.584349131Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.584362754Z" level=info msg="runSandbox: unmounting shmPath for sandbox aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.588314940Z" level=info msg="runSandbox: deleting pod ID 3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb from idIndex" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.588338960Z" level=info msg="runSandbox: removing pod sandbox 3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.588353923Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.588364546Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.588550507Z" level=info msg="runSandbox: removing pod sandbox from storage: 487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.591794964Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.591814622Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=81f54539-1af7-42ea-830e-2c04da954bd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.591947 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.591985 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.592008 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.592051 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.596330224Z" level=info msg="runSandbox: deleting pod ID 732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d from idIndex" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.596354858Z" level=info msg="runSandbox: removing pod sandbox 732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.596367293Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.596382488Z" level=info msg="runSandbox: unmounting shmPath for sandbox 732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.596490441Z" level=info msg="runSandbox: removing pod sandbox from storage: 00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.599781630Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.599800018Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=7c63c343-ff46-4ce0-839d-a8efcd52f2f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.599987 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.600023 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.600044 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.600082 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.600424070Z" level=info msg="runSandbox: removing pod sandbox from storage: aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.603429796Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.603450120Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=123209ad-45c0-49a7-b350-4952fca78bd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.603621 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.603655 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.603679 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.603719 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.604448509Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.607623252Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.607642520Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2d8d75ef-529e-4111-9aad-0ef86d627e42 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.607810 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.607843 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.607864 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.607902 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.609413027Z" level=info msg="runSandbox: removing pod sandbox from storage: 732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.612736971Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.612757853Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=d851989e-9e57-4512-91e8-d1d2a1720ccf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.612929 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.612961 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.612981 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:54:55.613021 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:55.654602 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:55.654791 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:55.654853 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.654905391Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.654940793Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:55.654977 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.655044638Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.655074122Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.655125803Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.655151024Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:55.655154 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.655261359Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.655299810Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.655347548Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.655369280Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.680082222Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/eba1d058-ae7b-4ea0-bfbf-fb1c4938ea1c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.680103481Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.683151546Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/f80cf19a-f329-4279-b9cc-028a4ab9a5c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.683178077Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.686277625Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/c583de43-61e3-4769-b51f-54b20bfd36f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.686299308Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.690904812Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b 
NetNS:/var/run/netns/bd032caa-085e-4f41-9a2d-847ab13ee905 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.690927191Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.691799529Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/14a66fc4-e09e-44d6-b9a2-3ca11abf8f97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:55.691822678Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5e18e7e9\x2d3212\x2d4094\x2d9919\x2d9cd1fc5ac59f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5e18e7e9\x2d3212\x2d4094\x2d9919\x2d9cd1fc5ac59f.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5e18e7e9\x2d3212\x2d4094\x2d9919\x2d9cd1fc5ac59f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5e18e7e9\x2d3212\x2d4094\x2d9919\x2d9cd1fc5ac59f.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5e18e7e9\x2d3212\x2d4094\x2d9919\x2d9cd1fc5ac59f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5e18e7e9\x2d3212\x2d4094\x2d9919\x2d9cd1fc5ac59f.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-44ee848a\x2de163\x2d4b13\x2d9c14\x2de19907d8f1a2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-44ee848a\x2de163\x2d4b13\x2d9c14\x2de19907d8f1a2.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-44ee848a\x2de163\x2d4b13\x2d9c14\x2de19907d8f1a2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-44ee848a\x2de163\x2d4b13\x2d9c14\x2de19907d8f1a2.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-44ee848a\x2de163\x2d4b13\x2d9c14\x2de19907d8f1a2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-44ee848a\x2de163\x2d4b13\x2d9c14\x2de19907d8f1a2.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b500d350\x2ddd17\x2d4db1\x2dbd48\x2dcca9c6bf0c00.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b500d350\x2ddd17\x2d4db1\x2dbd48\x2dcca9c6bf0c00.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b500d350\x2ddd17\x2d4db1\x2dbd48\x2dcca9c6bf0c00.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b500d350\x2ddd17\x2d4db1\x2dbd48\x2dcca9c6bf0c00.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b500d350\x2ddd17\x2d4db1\x2dbd48\x2dcca9c6bf0c00.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b500d350\x2ddd17\x2d4db1\x2dbd48\x2dcca9c6bf0c00.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-732a89b19a4234d97327c09be63d640df412d53d042aca9c88be24737de87e0d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-aa729ef5f27353e0efffc7495a01367e0ada136c9d2cf99e69c684693becfd2a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5e0ec4cc\x2d3d5d\x2d4f15\x2d9043\x2d6ba817e9e40a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5e0ec4cc\x2d3d5d\x2d4f15\x2d9043\x2d6ba817e9e40a.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5e0ec4cc\x2d3d5d\x2d4f15\x2d9043\x2d6ba817e9e40a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5e0ec4cc\x2d3d5d\x2d4f15\x2d9043\x2d6ba817e9e40a.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5e0ec4cc\x2d3d5d\x2d4f15\x2d9043\x2d6ba817e9e40a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5e0ec4cc\x2d3d5d\x2d4f15\x2d9043\x2d6ba817e9e40a.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-34c1e02a\x2d0d3e\x2d4270\x2d9a86\x2d9e135c0f7338.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-34c1e02a\x2d0d3e\x2d4270\x2d9a86\x2d9e135c0f7338.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-34c1e02a\x2d0d3e\x2d4270\x2d9a86\x2d9e135c0f7338.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-34c1e02a\x2d0d3e\x2d4270\x2d9a86\x2d9e135c0f7338.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-34c1e02a\x2d0d3e\x2d4270\x2d9a86\x2d9e135c0f7338.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-34c1e02a\x2d0d3e\x2d4270\x2d9a86\x2d9e135c0f7338.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-00051e0a521adcbfab584e92d2e6fa6fec1c57efb39505ef1e127ff6211fc116-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3ade358188311b0e33f78ca5abf5b2c3aead187ef04064423316332317af84bb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-487059f4ff232a34ef0e7a5558f254c2388d575d7c5eb0714817a756186ec70f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-71e336f91c78498d3cf0e9b6bb1d5713f0760bba247dfc167ccbd4836b7ac768-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:54:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:56.996346 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:54:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:56.997017203Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:56.997069678Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:57.007827208Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/cea206dc-a082-4e5f-81ed-b630b336f6aa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:57.007847381Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:54:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:58.143291509Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.544960 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-8sqsv] Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.544998 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.550950 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-8sqsv] Jan 23 16:54:58 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-besteffort-pod8acc2969_4d66_4e71_9dde_218ccafac14e.slice. -- Subject: Unit kubepods-besteffort-pod8acc2969_4d66_4e71_9dde_218ccafac14e.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-besteffort-pod8acc2969_4d66_4e71_9dde_218ccafac14e.slice has finished starting up. -- -- The start-up result is done. 
Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.589737 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8acc2969-4d66-4e71-9dde-218ccafac14e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.589764 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8acc2969-4d66-4e71-9dde-218ccafac14e-ready\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.589787 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdss6\" (UniqueName: \"kubernetes.io/projected/8acc2969-4d66-4e71-9dde-218ccafac14e-kube-api-access-wdss6\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.589804 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8acc2969-4d66-4e71-9dde-218ccafac14e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.690828 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8acc2969-4d66-4e71-9dde-218ccafac14e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.690855 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8acc2969-4d66-4e71-9dde-218ccafac14e-ready\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.690878 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-wdss6\" (UniqueName: \"kubernetes.io/projected/8acc2969-4d66-4e71-9dde-218ccafac14e-kube-api-access-wdss6\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.690897 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8acc2969-4d66-4e71-9dde-218ccafac14e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.691030 8631 
operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8acc2969-4d66-4e71-9dde-218ccafac14e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.691078 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8acc2969-4d66-4e71-9dde-218ccafac14e-ready\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.691237 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8acc2969-4d66-4e71-9dde-218ccafac14e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.705098 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdss6\" (UniqueName: \"kubernetes.io/projected/8acc2969-4d66-4e71-9dde-218ccafac14e-kube-api-access-wdss6\") pod \"cni-sysctl-allowlist-ds-8sqsv\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:54:58.861683 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:54:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:58.862083861Z" level=info msg="Running pod sandbox: openshift-multus/cni-sysctl-allowlist-ds-8sqsv/POD" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:54:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:58.862122313Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:54:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:58.872839526Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-8sqsv Namespace:openshift-multus ID:d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a UID:8acc2969-4d66-4e71-9dde-218ccafac14e NetNS:/var/run/netns/0f5fbc1b-3b37-48d7-9343-173332f77187 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:54:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:54:58.872858756Z" level=info msg="Adding pod openshift-multus_cni-sysctl-allowlist-ds-8sqsv to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:01.996062 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:55:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:01.996102 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:55:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:01.996414612Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:01.996721682Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:55:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:01.996465395Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:01.996865290Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:02.012980612Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/403733a4-39f0-4a54-bf46-6e0221d2fd36 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:02.013006600Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:02.012988219Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/d6b023a8-1405-4ce6-b970-8f754d245373 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:02.013163942Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:02.996750 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:55:02.997283 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:55:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:03.996292 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:03.996611405Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:03.996651370Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:04.007743418Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/506c1408-374f-4a53-b76f-1dcab3ec1b79 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:04.007764045Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:04.995597 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:04.995868177Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:04.995908985Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:55:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:05.006496272Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/e3a2ad9a-1f8d-42f3-88ec-1a2b153823f9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:05.006516169Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:05.996399 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:55:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:05.996480 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:55:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:05.996736664Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:05.996772398Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:55:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:05.996847537Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:05.996890050Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:55:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:06.016381748Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/73661a5b-6791-4790-b23b-f0d78fa720e9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:06.016409107Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:06.017180151Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/75d5bd89-7f27-4648-a878-1bf3bba45d47 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:06.017203325Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:06.996556 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:55:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:06.996893 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:55:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:06.996898511Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:06.996930375Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:55:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:06.997203439Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:06.997238887Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:55:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:07.012208577Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/b3499a95-d9a9-4a50-b9a7-8099ed306942 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:07.012230034Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:07.012210622Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/5e1b7bf8-aa90-4cee-8a4a-6a5967bc5df1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:07.012387476Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00098|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 adds) Jan 23 16:55:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:18.002845 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:55:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:55:18.003576 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:27.876100 8631 kubelet_getters.go:182] "Pod status 
updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:27.876120 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:27.876127 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:27.876135 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:27.876141 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:27.876149 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:27.876156 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:55:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:27.881743282Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=06ba5ce6-0732-4af2-bc13-b10cdf5841e2 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:55:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:27.881860588Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=06ba5ce6-0732-4af2-bc13-b10cdf5841e2 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:55:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:28.143870395Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:55:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:28.996541 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:55:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:55:28.997065 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:55:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:33.023543036Z" level=info msg="NetworkStart: stopping network for sandbox d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:33 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:55:33.023703929Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/98aa1b61-c56a-43d4-ab31-9370f21a8490 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:33.023729595Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:33.023737459Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:33.023745334Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:37.022468398Z" level=info msg="NetworkStart: stopping network for sandbox a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:37.022630848Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/479e28db-b52c-4890-886b-e2423494d70d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:37.022661552Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:37.022668849Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:37.022675274Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:39.996859 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:55:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:55:39.997519 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.026458284Z" level=info msg="NetworkStart: stopping network for sandbox 87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.026607665Z" level=info msg="Got 
pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/483813b1-47b4-4c5d-8e27-87aacb3f643e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.026629884Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.026636497Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.026644275Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.697826348Z" level=info msg="NetworkStart: stopping network for sandbox 13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.697922933Z" level=info msg="NetworkStart: stopping network for sandbox 4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.697968838Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/f80cf19a-f329-4279-b9cc-028a4ab9a5c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.697991571Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.697998995Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.698005046Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.698075005Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/eba1d058-ae7b-4ea0-bfbf-fb1c4938ea1c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.698104128Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:55:40.698111769Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.698117959Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.699284596Z" level=info msg="NetworkStart: stopping network for sandbox 61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.699434857Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/c583de43-61e3-4769-b51f-54b20bfd36f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.699467983Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.699478979Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.699488165Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.703777411Z" level=info msg="NetworkStart: stopping network for sandbox 8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.703881827Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/bd032caa-085e-4f41-9a2d-847ab13ee905 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.703902795Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.703912316Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.703921597Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.704133959Z" level=info msg="NetworkStart: stopping network for sandbox 9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:55:40.704249847Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/14a66fc4-e09e-44d6-b9a2-3ca11abf8f97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.704272892Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.704282508Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:40.704289415Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:42.022772684Z" level=info msg="NetworkStart: stopping network for sandbox 327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:42.022918207Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/cea206dc-a082-4e5f-81ed-b630b336f6aa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:42.022942693Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:42.022949468Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:42.022955940Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:43.887080551Z" level=info msg="NetworkStart: stopping network for sandbox d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:43.887240190Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-8sqsv Namespace:openshift-multus ID:d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a UID:8acc2969-4d66-4e71-9dde-218ccafac14e NetNS:/var/run/netns/0f5fbc1b-3b37-48d7-9343-173332f77187 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:43.887264599Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:43.887272019Z" level=warning msg="falling back to 
loading from existing plugins on disk" Jan 23 16:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:43.887278503Z" level=info msg="Deleting pod openshift-multus_cni-sysctl-allowlist-ds-8sqsv from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.025267884Z" level=info msg="NetworkStart: stopping network for sandbox 61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.025410599Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/d6b023a8-1405-4ce6-b970-8f754d245373 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.025433036Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.025440515Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.025446745Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.026369801Z" level=info msg="NetworkStart: stopping network for sandbox 06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.026479681Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/403733a4-39f0-4a54-bf46-6e0221d2fd36 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.026498977Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.026505689Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:47.026512089Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:49.020780927Z" level=info msg="NetworkStart: stopping network for sandbox 27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:49.020919543Z" level=info msg="Got pod network 
&{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/506c1408-374f-4a53-b76f-1dcab3ec1b79 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:49.020941842Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:49.020949106Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:49.020955509Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:50.018677885Z" level=info msg="NetworkStart: stopping network for sandbox 3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:50.018825896Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/e3a2ad9a-1f8d-42f3-88ec-1a2b153823f9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:50.018853434Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:50.018861510Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:50.018867720Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.028675759Z" level=info msg="NetworkStart: stopping network for sandbox 0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.028812215Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/73661a5b-6791-4790-b23b-f0d78fa720e9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.028833572Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.028840605Z" level=warning msg="falling back to loading from 
existing plugins on disk" Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.028846820Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.030935686Z" level=info msg="NetworkStart: stopping network for sandbox 0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.031073974Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/75d5bd89-7f27-4648-a878-1bf3bba45d47 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.031098057Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.031104811Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:51.031111248Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:51.996717 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:55:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:55:51.997256 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.024816604Z" level=info msg="NetworkStart: stopping network for sandbox d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.024977272Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/5e1b7bf8-aa90-4cee-8a4a-6a5967bc5df1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.025005339Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.025013013Z" level=warning msg="falling back to loading from existing 
plugins on disk" Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.025020657Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.025123449Z" level=info msg="NetworkStart: stopping network for sandbox e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.025264196Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/b3499a95-d9a9-4a50-b9a7-8099ed306942 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.025286360Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.025294795Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:55:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:52.025300988Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:55:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:55:58.143446518Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:55:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:55:58.573960 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-8sqsv] Jan 23 16:56:02 hub-master-0.workload.bos2.lab conmon[69663]: conmon 7a1568c8ffde10fbf461 : container 69675 exited with status 1 Jan 23 16:56:02 hub-master-0.workload.bos2.lab systemd[1]: crio-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope has successfully entered the 'dead' state. Jan 23 16:56:02 hub-master-0.workload.bos2.lab systemd[1]: crio-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope: Consumed 3.671s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope completed and consumed the indicated resources. Jan 23 16:56:02 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope has successfully entered the 'dead' state. 
Jan 23 16:56:02 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope: Consumed 53ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac.scope completed and consumed the indicated resources. Jan 23 16:56:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:03.778841 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac" exitCode=1 Jan 23 16:56:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:03.778912 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac} Jan 23 16:56:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:03.779003 8631 scope.go:115] "RemoveContainer" containerID="b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1" Jan 23 16:56:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:03.779252 8631 scope.go:115] "RemoveContainer" containerID="7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac" Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.779668265Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=2d9c9afc-66ff-459a-a885-97da3f85a29f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.779831168Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2d9c9afc-66ff-459a-a885-97da3f85a29f name=/runtime.v1.ImageService/ImageStatus Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.780013852Z" level=info msg="Removing container: b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1" id=1cd230b0-8f02-4c65-95a1-81558cb34c54 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.780203746Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=c03e5fdf-2b5b-47ff-bc87-46a2f7d57f19 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.780293292Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c03e5fdf-2b5b-47ff-bc87-46a2f7d57f19 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:56:03.781126572Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=0dbe4936-d207-486e-b66c-9f5288f722b0 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.781210944Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:56:03 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-326148b73317003312004557c38c5703821d5a409d66d9ccc1a2db784eb9e5e3-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-326148b73317003312004557c38c5703821d5a409d66d9ccc1a2db784eb9e5e3-merged.mount has successfully entered the 'dead' state. Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.815774118Z" level=info msg="Removed container b70eaf26f79b964d71d02c220438c6926e41c62d3028ae8f0a279681c499c8b1: openshift-multus/multus-cdt6c/kube-multus" id=1cd230b0-8f02-4c65-95a1-81558cb34c54 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 16:56:03 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope. -- Subject: Unit crio-conmon-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:56:03 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5. -- Subject: Unit crio-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.930860594Z" level=info msg="Created container ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5: openshift-multus/multus-cdt6c/kube-multus" id=0dbe4936-d207-486e-b66c-9f5288f722b0 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.931298258Z" level=info msg="Starting container: ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5" id=3bc3dc85-06d5-4d02-929f-5a9af2ae129c name=/runtime.v1.RuntimeService/StartContainer Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.937795241Z" level=info msg="Started container" PID=87830 containerID=ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5 description=openshift-multus/multus-cdt6c/kube-multus id=3bc3dc85-06d5-4d02-929f-5a9af2ae129c name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8 Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.942237718Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_9b1df356-d780-4a30-be17-853775615879\"" Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.952477279Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.952497246Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.962542448Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\"" Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.972080420Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.972097267Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 16:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:03.972107301Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_9b1df356-d780-4a30-be17-853775615879\"" Jan 23 16:56:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:04.781930 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5} Jan 23 16:56:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:05.996187 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:56:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:05.996707 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:56:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:16.996796 8631 scope.go:115] "RemoveContainer" 
containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:56:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:16.997449 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.035745177Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.035789215Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-98aa1b61\x2dc56a\x2d43d4\x2dab31\x2d9370f21a8490.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-98aa1b61\x2dc56a\x2d43d4\x2dab31\x2d9370f21a8490.mount has successfully entered the 'dead' state. Jan 23 16:56:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-98aa1b61\x2dc56a\x2d43d4\x2dab31\x2d9370f21a8490.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-98aa1b61\x2dc56a\x2d43d4\x2dab31\x2d9370f21a8490.mount has successfully entered the 'dead' state. Jan 23 16:56:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-98aa1b61\x2dc56a\x2d43d4\x2dab31\x2d9370f21a8490.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-98aa1b61\x2dc56a\x2d43d4\x2dab31\x2d9370f21a8490.mount has successfully entered the 'dead' state. 
Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.071410680Z" level=info msg="runSandbox: deleting pod ID d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90 from idIndex" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.071440288Z" level=info msg="runSandbox: removing pod sandbox d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.071456776Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.071479297Z" level=info msg="runSandbox: unmounting shmPath for sandbox d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.079438191Z" level=info msg="runSandbox: removing pod sandbox from storage: d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.082992135Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:18.083011265Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=0a991596-266f-43c0-94d2-cfca82481826 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:18.083152 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:18.083199 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:56:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:18.083267 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:56:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:18.083317 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d2bf19918dc3c38406a8185e57e51eacd25b1cd35b51d854d179de7cdf267f90): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.033760869Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.033809251Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-479e28db\x2db52c\x2d4890\x2d886b\x2de2423494d70d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-479e28db\x2db52c\x2d4890\x2d886b\x2de2423494d70d.mount has successfully entered the 'dead' state. Jan 23 16:56:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-479e28db\x2db52c\x2d4890\x2d886b\x2de2423494d70d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-479e28db\x2db52c\x2d4890\x2d886b\x2de2423494d70d.mount has successfully entered the 'dead' state. Jan 23 16:56:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-479e28db\x2db52c\x2d4890\x2d886b\x2de2423494d70d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-479e28db\x2db52c\x2d4890\x2d886b\x2de2423494d70d.mount has successfully entered the 'dead' state. 
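The sandbox failures above all bottom out in the same wait: multus is configured with a readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes writes once the default network is up. With ovnkube-node crash-looping, the file never appears, the poll hits its timeout, and every CNI ADD/DEL fails with "timed out waiting for the condition" — the error text of wait.ErrWaitTimeout from k8s.io/apimachinery, matching the "PollImmediate error waiting for ReadinessIndicatorFile" lines. A minimal sketch of that wait; the 1s interval and 10m timeout here are illustrative, not multus's exact settings:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	file := "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	err := wait.PollImmediate(1*time.Second, 10*time.Minute, func() (bool, error) {
		_, statErr := os.Stat(file)
		return statErr == nil, nil // keep polling until the file exists
	})
	if err != nil {
		fmt.Printf("PollImmediate error waiting for ReadinessIndicatorFile: %v\n", err)
	}
}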
Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.081408035Z" level=info msg="runSandbox: deleting pod ID a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099 from idIndex" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.081437638Z" level=info msg="runSandbox: removing pod sandbox a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.081455797Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.081474109Z" level=info msg="runSandbox: unmounting shmPath for sandbox a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.089446327Z" level=info msg="runSandbox: removing pod sandbox from storage: a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.092938640Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:22.092959934Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=e369ff48-beba-49a0-b2f6-c7ae13fa2d2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:22.093166 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:22.093212 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:22.093233 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:22.093285 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a961eae08a822bcb2be1a15ed45edf412489a78a08af0a67c2ccd7f228040099): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.037233235Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.037272787Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-483813b1\x2d47b4\x2d4c5d\x2d8e27\x2d87aacb3f643e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-483813b1\x2d47b4\x2d4c5d\x2d8e27\x2d87aacb3f643e.mount has successfully entered the 'dead' state. Jan 23 16:56:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-483813b1\x2d47b4\x2d4c5d\x2d8e27\x2d87aacb3f643e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-483813b1\x2d47b4\x2d4c5d\x2d8e27\x2d87aacb3f643e.mount has successfully entered the 'dead' state. Jan 23 16:56:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-483813b1\x2d47b4\x2d4c5d\x2d8e27\x2d87aacb3f643e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-483813b1\x2d47b4\x2d4c5d\x2d8e27\x2d87aacb3f643e.mount has successfully entered the 'dead' state. 
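The runSandbox sequences above (and repeated below for the other guard pods) show CRI-O's cleanup path when a sandbox never got a network: unregister the pod and container IDs, unmount the sandbox's shm, drop it from storage, and release the reserved names so kubelet's retry can reuse them. A structural sketch of that ordering; the index type, the short IDs, and the MNT_DETACH flag are assumptions for illustration, not CRI-O's actual types or mount flags:

package main

import (
	"fmt"
	"syscall"
)

// index is a hypothetical stand-in for CRI-O's idIndex registry.
type index map[string]bool

func cleanupSandbox(ids index, podID, ctrID, shmPath string) error {
	delete(ids, podID) // "runSandbox: deleting pod ID ... from idIndex"
	fmt.Println("removing pod sandbox", podID)
	delete(ids, ctrID) // "deleting container ID from idIndex for sandbox ..."
	// "unmounting shmPath for sandbox ..." — lazy unmount chosen for the sketch.
	if err := syscall.Unmount(shmPath, syscall.MNT_DETACH); err != nil {
		return err
	}
	fmt.Println("removing pod sandbox from storage:", podID)
	fmt.Println("releasing container name / releasing pod sandbox name")
	return nil
}

func main() {
	_ = cleanupSandbox(index{}, "a961eae08a82...", "infra-ctr", "/tmp/example-shm")
}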
Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.080308818Z" level=info msg="runSandbox: deleting pod ID 87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d from idIndex" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.080337163Z" level=info msg="runSandbox: removing pod sandbox 87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.080351268Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.080364499Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.092450788Z" level=info msg="runSandbox: removing pod sandbox from storage: 87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.096084151Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.096102767Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=912fade1-cc44-49d0-a816-19039b6ded51 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.096330 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.096377 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.096404 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.096457 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f7e99caa1d54cfa4ab845fffde363b9316f19f0dcb1e808a9a2613d5c8484d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.708222156Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.708253112Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.708991466Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): 
timed out waiting for the condition" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.709018384Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.710038124Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.710071371Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f80cf19a\x2df329\x2d4279\x2db9cc\x2d028a4ab9a5c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f80cf19a\x2df329\x2d4279\x2db9cc\x2d028a4ab9a5c8.mount has successfully entered the 'dead' state. 
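Note: the add and delete failures above all reduce to one condition. Multus will not wire a pod until the default network (OVN-Kubernetes) has written its CNI config, and it discovers that file by polling for it. A minimal sketch of that wait, assuming the k8s.io/apimachinery wait helper whose timeout error carries the literal "timed out waiting for the condition" text seen in every entry; the one-second interval and 30-second timeout here are illustrative, not Multus's actual settings:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the default network's CNI config
// exists on disk; on timeout, wait.PollImmediate returns the
// "timed out waiting for the condition" error quoted throughout the log.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			return false, nil // file not written yet; keep polling
		}
		return true, nil
	})
}

func main() {
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForReadinessIndicator(indicator, 30*time.Second); err != nil {
		fmt.Printf("still waiting for readinessindicatorfile @ %s. pollimmediate error: %v\n", indicator, err)
	}
}

Until that file exists, every sandbox add fails the same way and every cleanup delete times out on the same poll, which is why the identical message repeats for each pod below.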
Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.714366838Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.714412587Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.715614758Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.715646982Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.765332645Z" level=info msg="runSandbox: deleting pod ID 4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6 from idIndex" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.765356417Z" level=info msg="runSandbox: removing pod sandbox 4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.765369405Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.765379813Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6" id=af348be5-d629-4f74-8eec-72832453a977 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.766329041Z" level=info msg="runSandbox: deleting pod ID 61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0 from idIndex" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.766355363Z" level=info msg="runSandbox: removing pod sandbox 61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.766370350Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.766332086Z" level=info msg="runSandbox: deleting pod ID 13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe from idIndex" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.766451239Z" level=info msg="runSandbox: removing pod sandbox 13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.768693753Z" level=info msg="runSandbox: unmounting shmPath for sandbox 61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.768798588Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.769051616Z" level=info msg="runSandbox: unmounting shmPath for sandbox 13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.773340712Z" level=info msg="runSandbox: deleting pod ID 9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2 from idIndex" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.773376429Z" level=info msg="runSandbox: removing pod sandbox 9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.773397048Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.773413247Z" level=info 
msg="runSandbox: unmounting shmPath for sandbox 9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.773347360Z" level=info msg="runSandbox: deleting pod ID 8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802 from idIndex" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.773531924Z" level=info msg="runSandbox: removing pod sandbox 8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.773549904Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.773567449Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.781423830Z" level=info msg="runSandbox: removing pod sandbox from storage: 13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.784736475Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.784754296Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=12b1a467-d891-43c6-8311-e80295df75bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.784957 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.785000 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.785023 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.785071 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.785464169Z" level=info msg="runSandbox: removing pod sandbox from storage: 4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.785484801Z" level=info msg="runSandbox: removing pod sandbox from storage: 61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.785530628Z" level=info msg="runSandbox: removing pod sandbox from storage: 8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.788895735Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.788915086Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=af348be5-d629-4f74-8eec-72832453a977 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.789108 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.789144 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.789165 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.789204 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.791926668Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.791944974Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=108aa8cc-98c9-4cc8-bf8d-0c35ec4bd718 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.792056 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.792084 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.792105 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.792139 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.792466409Z" level=info msg="runSandbox: removing pod sandbox from storage: 9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.794931608Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.794949807Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=bcd38c05-8897-4530-a8e7-295030ad8817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.795218 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.795249 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.795269 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.795308 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.797911012Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.797927103Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=4df11c82-7857-4c76-b367-3580fad34d3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.798109 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.798140 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.798160 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:25.798215 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:25.818364 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:25.818552 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.818647717Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.818679139Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:25.818694 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:25.818713 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:56:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:25.818776 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.818776729Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.818805596Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.819034061Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.819051173Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.819081116Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.819106601Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.819051952Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.819171174Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.843949083Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/05ac5e4f-114e-4bcd-a400-2cb4a9051ad5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.844150525Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.845449826Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/0abb3f86-9e19-4c95-8b1e-53bac4f12bf7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.845471504Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.848148167Z" level=info msg="Got pod network 
&{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/3dc39189-3776-41ae-9867-8ad2e3628ba8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.848174495Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.850003928Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/46451476-427e-43dc-b7c9-21f8bddd402a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.850029868Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.851944175Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/69f6e09b-f720-4543-a0eb-3bb2ee1525ef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:56:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:25.851966448Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-14a66fc4\x2de09e\x2d44d6\x2db9a2\x2d3ca11abf8f97.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-14a66fc4\x2de09e\x2d44d6\x2db9a2\x2d3ca11abf8f97.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-14a66fc4\x2de09e\x2d44d6\x2db9a2\x2d3ca11abf8f97.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-14a66fc4\x2de09e\x2d44d6\x2db9a2\x2d3ca11abf8f97.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-14a66fc4\x2de09e\x2d44d6\x2db9a2\x2d3ca11abf8f97.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-14a66fc4\x2de09e\x2d44d6\x2db9a2\x2d3ca11abf8f97.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bd032caa\x2d085e\x2d4f41\x2d9a2d\x2d847ab13ee905.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-bd032caa\x2d085e\x2d4f41\x2d9a2d\x2d847ab13ee905.mount has successfully entered the 'dead' state. 
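Note: each "Got pod network &{...}" entry is CRI-O printing, in Go's %+v style, the pod-network description it hands to the CNI layer just before the add that then hangs on the readiness poll. A reconstruction of that shape from the dumps alone; the field names match the log, but the types are inferred and are not the actual ocicni definitions:

package ocicnisketch

// RuntimeConfig mirrors the per-network overrides printed as
// {IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]} (all empty here).
type RuntimeConfig struct {
	IP           string
	MAC          string
	PortMappings []string // placeholder type; printed as []
	Bandwidth    string   // placeholder type; printed empty
	IpRanges     []string // placeholder type; printed as []
}

// PodNetwork mirrors the outer &{...} dump.
type PodNetwork struct {
	Name          string                   // pod name, e.g. "oauth-openshift-868d5f6bf8-svlxj"
	Namespace     string                   // pod namespace, e.g. "openshift-authentication"
	ID            string                   // infra (sandbox) container ID
	UID           string                   // pod UID
	NetNS         string                   // namespace path under /var/run/netns
	Networks      []string                 // secondary attachments; empty for these pods
	RuntimeConfig map[string]RuntimeConfig // keyed by network name, here "multus-cni-network"
	Aliases       map[string][]string      // printed as map[]
}

The NetNS path is the freshly created network namespace for the new sandbox attempt; the run-netns-*.mount units entering the 'dead' state around these entries are the namespaces of earlier failed attempts being torn down.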
Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bd032caa\x2d085e\x2d4f41\x2d9a2d\x2d847ab13ee905.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-bd032caa\x2d085e\x2d4f41\x2d9a2d\x2d847ab13ee905.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bd032caa\x2d085e\x2d4f41\x2d9a2d\x2d847ab13ee905.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bd032caa\x2d085e\x2d4f41\x2d9a2d\x2d847ab13ee905.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9c0a21adee838753fd6d7164701fae5b176e1671c99f37f97534a18a502c47e2-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c583de43\x2d61e3\x2d4769\x2db51f\x2d54b20bfd36f8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c583de43\x2d61e3\x2d4769\x2db51f\x2d54b20bfd36f8.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c583de43\x2d61e3\x2d4769\x2db51f\x2d54b20bfd36f8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c583de43\x2d61e3\x2d4769\x2db51f\x2d54b20bfd36f8.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c583de43\x2d61e3\x2d4769\x2db51f\x2d54b20bfd36f8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c583de43\x2d61e3\x2d4769\x2db51f\x2d54b20bfd36f8.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f80cf19a\x2df329\x2d4279\x2db9cc\x2d028a4ab9a5c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f80cf19a\x2df329\x2d4279\x2db9cc\x2d028a4ab9a5c8.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f80cf19a\x2df329\x2d4279\x2db9cc\x2d028a4ab9a5c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f80cf19a\x2df329\x2d4279\x2db9cc\x2d028a4ab9a5c8.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-eba1d058\x2dae7b\x2d4ea0\x2dbfbf\x2dfb1c4938ea1c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-eba1d058\x2dae7b\x2d4ea0\x2dbfbf\x2dfb1c4938ea1c.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-eba1d058\x2dae7b\x2d4ea0\x2dbfbf\x2dfb1c4938ea1c.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-eba1d058\x2dae7b\x2d4ea0\x2dbfbf\x2dfb1c4938ea1c.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-eba1d058\x2dae7b\x2d4ea0\x2dbfbf\x2dfb1c4938ea1c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-eba1d058\x2dae7b\x2d4ea0\x2dbfbf\x2dfb1c4938ea1c.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8ee5ad6dd13b72541663429c2b13ce6b050f00c910cf61324682918cd5e6d802-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-61d78520b138d94abe04beb20ff84d5621a015b31748f3c76acf7e30fd8016b0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-13a941efc7f023b4f0c1ce837a278a65254f6dcb459f843134c1efb68c4390fe-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4fd66ca8b46b975458a394ecd3724ff5beb0743850d0d98304dc27cc4a2708f6-userdata-shm.mount has successfully entered the 'dead' state. 
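Note: each run-containers-storage-overlay\x2dcontainers-<sandbox-id>-userdata-shm.mount unit entering the 'dead' state pairs with one of the "runSandbox: unmounting shmPath" entries above: the sandbox's shm tmpfs is detached so the sandbox directory can be removed. A sketch of that step only, assuming a lazy detach; CRI-O's actual helper and flags may differ:

package shmsketch

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// unmountShm lazily detaches a sandbox's shm tmpfs; MNT_DETACH lets the
// kernel finish the unmount once any remaining users go away.
func unmountShm(shmPath string) error {
	if err := unix.Unmount(shmPath, unix.MNT_DETACH); err != nil && err != unix.EINVAL {
		return fmt.Errorf("unmount %s: %w", shmPath, err)
	}
	return nil // EINVAL here usually just means the path was not mounted
}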
Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.034029119Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.034070245Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cea206dc\x2da082\x2d4e5f\x2d81ed\x2db630b336f6aa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cea206dc\x2da082\x2d4e5f\x2d81ed\x2db630b336f6aa.mount has successfully entered the 'dead' state. Jan 23 16:56:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cea206dc\x2da082\x2d4e5f\x2d81ed\x2db630b336f6aa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cea206dc\x2da082\x2d4e5f\x2d81ed\x2db630b336f6aa.mount has successfully entered the 'dead' state. Jan 23 16:56:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cea206dc\x2da082\x2d4e5f\x2d81ed\x2db630b336f6aa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cea206dc\x2da082\x2d4e5f\x2d81ed\x2db630b336f6aa.mount has successfully entered the 'dead' state. 
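Note: the ingress-canary teardown that follows repeats, step for step, the sequence already seen for the five control-plane sandboxes: even when the CNI delete times out, CRI-O proceeds through a fixed cleanup order. A hypothetical reconstruction of that order from the log messages alone; every name below is a stand-in, not a CRI-O internal:

package criosketch

// sandbox carries just the fields the cleanup messages mention.
type sandbox struct {
	podID, containerID, shmPath string
	containerName, sandboxName  string
}

// Stubs standing in for CRI-O internals referenced by the log lines.
func deleteFromIDIndex(id string) {}
func removeSandboxDir(id string)  {}
func unmountShm(path string)      {} // see the unmount sketch above
func removeFromStorage(id string) {}
func releaseName(name string)     {}

// cleanupSandbox replays the ordering visible in each runSandbox sequence.
func cleanupSandbox(s *sandbox) {
	deleteFromIDIndex(s.podID)       // "deleting pod ID <id> from idIndex"
	removeSandboxDir(s.podID)        // "removing pod sandbox <id>"
	deleteFromIDIndex(s.containerID) // "deleting container ID from idIndex for sandbox <id>"
	unmountShm(s.shmPath)            // "unmounting shmPath for sandbox <id>"
	removeFromStorage(s.podID)       // "removing pod sandbox from storage: <id>"
	releaseName(s.containerName)     // "releasing container name: k8s_POD_..."
	releaseName(s.sandboxName)       // "releasing pod sandbox name: k8s_..."
}

Only after the names are released can the kubelet's next sync reserve them again, which is what the earlier "Running pod sandbox: .../POD" entries show happening for each pod.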
Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.074283773Z" level=info msg="runSandbox: deleting pod ID 327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee from idIndex" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.074313780Z" level=info msg="runSandbox: removing pod sandbox 327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.074332357Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.074346840Z" level=info msg="runSandbox: unmounting shmPath for sandbox 327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.086456690Z" level=info msg="runSandbox: removing pod sandbox from storage: 327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.089379548Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:27.089398551Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=87c4eb64-b062-45c9-a831-ce10b77b3e81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:27.089647 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:27.089698 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:27.089724 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:27.089776 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(327069080c7fc07b4f758ea66ab8353968dc1215662f689ff8c2a8f4ca3a4bee): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:27.876866 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:27.876887 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:27.876893 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:27.876900 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:27.876905 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:27.876910 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:27.876916 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.141803618Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.899042432Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_cni-sysctl-allowlist-ds-8sqsv_openshift-multus_8acc2969-4d66-4e71-9dde-218ccafac14e_0(d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a): error removing pod openshift-multus_cni-sysctl-allowlist-ds-8sqsv from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/cni-sysctl-allowlist-ds-8sqsv/8acc2969-4d66-4e71-9dde-218ccafac14e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.899087979Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0f5fbc1b\x2d3b37\x2d48d7\x2d9343\x2d173332f77187.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0f5fbc1b\x2d3b37\x2d48d7\x2d9343\x2d173332f77187.mount has successfully entered the 'dead' state. Jan 23 16:56:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0f5fbc1b\x2d3b37\x2d48d7\x2d9343\x2d173332f77187.mount: Succeeded. 
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-0f5fbc1b\x2d3b37\x2d48d7\x2d9343\x2d173332f77187.mount has successfully entered the 'dead' state.
Jan 23 16:56:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0f5fbc1b\x2d3b37\x2d48d7\x2d9343\x2d173332f77187.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-0f5fbc1b\x2d3b37\x2d48d7\x2d9343\x2d173332f77187.mount has successfully entered the 'dead' state.
Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.933285470Z" level=info msg="runSandbox: deleting pod ID d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a from idIndex" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.933315830Z" level=info msg="runSandbox: removing pod sandbox d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.933332325Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.933348337Z" level=info msg="runSandbox: unmounting shmPath for sandbox d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.946469874Z" level=info msg="runSandbox: removing pod sandbox from storage: d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.950111529Z" level=info msg="runSandbox: releasing container name: k8s_POD_cni-sysctl-allowlist-ds-8sqsv_openshift-multus_8acc2969-4d66-4e71-9dde-218ccafac14e_0" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:28.950132091Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_cni-sysctl-allowlist-ds-8sqsv_openshift-multus_8acc2969-4d66-4e71-9dde-218ccafac14e_0" id=ad8600ab-a6c4-42e4-96cf-e3f95f8aa0b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:28.950385 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-8sqsv_openshift-multus_8acc2969-4d66-4e71-9dde-218ccafac14e_0(d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a): error adding pod openshift-multus_cni-sysctl-allowlist-ds-8sqsv to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-8sqsv/8acc2969-4d66-4e71-9dde-218ccafac14e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:28.950435 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-8sqsv_openshift-multus_8acc2969-4d66-4e71-9dde-218ccafac14e_0(d69f5b00fb92d081c32f7ab7d9770be64955f48f091d54739e0f399f6912e98a): error adding pod openshift-multus_cni-sysctl-allowlist-ds-8sqsv to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-8sqsv/8acc2969-4d66-4e71-9dde-218ccafac14e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/cni-sysctl-allowlist-ds-8sqsv" Jan 23 16:56:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:28.996440 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:56:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:28.996942 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.904607 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8acc2969-4d66-4e71-9dde-218ccafac14e-ready\") pod \"8acc2969-4d66-4e71-9dde-218ccafac14e\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.904748 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8acc2969-4d66-4e71-9dde-218ccafac14e-cni-sysctl-allowlist\") pod \"8acc2969-4d66-4e71-9dde-218ccafac14e\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.904770 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8acc2969-4d66-4e71-9dde-218ccafac14e-tuning-conf-dir\") pod \"8acc2969-4d66-4e71-9dde-218ccafac14e\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:56:29.904784 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/8acc2969-4d66-4e71-9dde-218ccafac14e/volumes/kubernetes.io~empty-dir/ready: clearQuota called, but quotas disabled Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.904815 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8acc2969-4d66-4e71-9dde-218ccafac14e-ready" (OuterVolumeSpecName: "ready") pod "8acc2969-4d66-4e71-9dde-218ccafac14e" (UID: "8acc2969-4d66-4e71-9dde-218ccafac14e"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.904790 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdss6\" (UniqueName: \"kubernetes.io/projected/8acc2969-4d66-4e71-9dde-218ccafac14e-kube-api-access-wdss6\") pod \"8acc2969-4d66-4e71-9dde-218ccafac14e\" (UID: \"8acc2969-4d66-4e71-9dde-218ccafac14e\") " Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.904919 8631 reconciler.go:399] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8acc2969-4d66-4e71-9dde-218ccafac14e-ready\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.904925 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acc2969-4d66-4e71-9dde-218ccafac14e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "8acc2969-4d66-4e71-9dde-218ccafac14e" (UID: "8acc2969-4d66-4e71-9dde-218ccafac14e"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 16:56:29.904947 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/8acc2969-4d66-4e71-9dde-218ccafac14e/volumes/kubernetes.io~configmap/cni-sysctl-allowlist: clearQuota called, but quotas disabled Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.905047 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8acc2969-4d66-4e71-9dde-218ccafac14e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "8acc2969-4d66-4e71-9dde-218ccafac14e" (UID: "8acc2969-4d66-4e71-9dde-218ccafac14e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:56:29 hub-master-0.workload.bos2.lab systemd[1]: var-lib-kubelet-pods-8acc2969\x2d4d66\x2d4e71\x2d9dde\x2d218ccafac14e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdss6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-kubelet-pods-8acc2969\x2d4d66\x2d4e71\x2d9dde\x2d218ccafac14e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdss6.mount has successfully entered the 'dead' state. Jan 23 16:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:29.915692 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acc2969-4d66-4e71-9dde-218ccafac14e-kube-api-access-wdss6" (OuterVolumeSpecName: "kube-api-access-wdss6") pod "8acc2969-4d66-4e71-9dde-218ccafac14e" (UID: "8acc2969-4d66-4e71-9dde-218ccafac14e"). InnerVolumeSpecName "kube-api-access-wdss6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:30 hub-master-0.workload.bos2.lab systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod8acc2969_4d66_4e71_9dde_218ccafac14e.slice. -- Subject: Unit kubepods-besteffort-pod8acc2969_4d66_4e71_9dde_218ccafac14e.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-besteffort-pod8acc2969_4d66_4e71_9dde_218ccafac14e.slice has finished shutting down. 
Jan 23 16:56:30 hub-master-0.workload.bos2.lab systemd[1]: kubepods-besteffort-pod8acc2969_4d66_4e71_9dde_218ccafac14e.slice: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit kubepods-besteffort-pod8acc2969_4d66_4e71_9dde_218ccafac14e.slice completed and consumed the indicated resources.
Jan 23 16:56:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:30.005103 8631 reconciler.go:399] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8acc2969-4d66-4e71-9dde-218ccafac14e-cni-sysctl-allowlist\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:56:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:30.005122 8631 reconciler.go:399] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8acc2969-4d66-4e71-9dde-218ccafac14e-tuning-conf-dir\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:56:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:30.005130 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-wdss6\" (UniqueName: \"kubernetes.io/projected/8acc2969-4d66-4e71-9dde-218ccafac14e-kube-api-access-wdss6\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 16:56:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:30.836865 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-8sqsv]
Jan 23 16:56:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:30.838675 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-8sqsv]
Jan 23 16:56:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:31.999656 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8acc2969-4d66-4e71-9dde-218ccafac14e path="/var/lib/kubelet/pods/8acc2969-4d66-4e71-9dde-218ccafac14e/volumes"
Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.035314754Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.035361166Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.036232503Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.036274192Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d6b023a8\x2d1405\x2d4ce6\x2db970\x2d8f754d245373.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d6b023a8\x2d1405\x2d4ce6\x2db970\x2d8f754d245373.mount has successfully entered the 'dead' state.
Jan 23 16:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-403733a4\x2d39f0\x2d4a54\x2dbf46\x2d6e0221d2fd36.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-403733a4\x2d39f0\x2d4a54\x2dbf46\x2d6e0221d2fd36.mount has successfully entered the 'dead' state.
Jan 23 16:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-403733a4\x2d39f0\x2d4a54\x2dbf46\x2d6e0221d2fd36.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-403733a4\x2d39f0\x2d4a54\x2dbf46\x2d6e0221d2fd36.mount has successfully entered the 'dead' state.
Jan 23 16:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d6b023a8\x2d1405\x2d4ce6\x2db970\x2d8f754d245373.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d6b023a8\x2d1405\x2d4ce6\x2db970\x2d8f754d245373.mount has successfully entered the 'dead' state.
Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.083297799Z" level=info msg="runSandbox: deleting pod ID 06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f from idIndex" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.083324810Z" level=info msg="runSandbox: removing pod sandbox 06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.083346475Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.083365847Z" level=info msg="runSandbox: unmounting shmPath for sandbox 06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.087292954Z" level=info msg="runSandbox: deleting pod ID 61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2 from idIndex" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.087322146Z" level=info msg="runSandbox: removing pod sandbox 61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.087338864Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.087353366Z" level=info msg="runSandbox: unmounting shmPath for sandbox 61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.095453213Z" level=info msg="runSandbox: removing pod sandbox from storage: 06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.099022456Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.099043501Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1daf667a-8cc8-412c-adcd-ff4d7f898a25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 
16:56:32.099273 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:32.099315 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:32.099338 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:32.099387 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.103465200Z" level=info msg="runSandbox: removing pod sandbox from storage: 61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.106872227Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.106892158Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=a91f14b4-98ea-4e2b-80b1-77dafab1484d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:32.107074 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:32.107108 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:32.107130 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:32.107175 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d6b023a8\x2d1405\x2d4ce6\x2db970\x2d8f754d245373.mount: Succeeded. 
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d6b023a8\x2d1405\x2d4ce6\x2db970\x2d8f754d245373.mount has successfully entered the 'dead' state.
Jan 23 16:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-403733a4\x2d39f0\x2d4a54\x2dbf46\x2d6e0221d2fd36.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-403733a4\x2d39f0\x2d4a54\x2dbf46\x2d6e0221d2fd36.mount has successfully entered the 'dead' state.
Jan 23 16:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-61e3aebb6d95f351df910a71685175884bc92dc0ab641dd69a08e4ae8038ddc2-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-06587549ef52ccb5b7a6f86a0e8c476f5f56f3f0c4ae346425ec5458e7a2ae1f-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:32.995644 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.996005397Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:32.996048673Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:33.007624801Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/d919ae96-412e-4826-8f7b-4c3b88f9de75 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:33.007647130Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.031427556Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.031464631Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-506c1408\x2d374f\x2d4a53\x2db76f\x2d1dcab3ec1b79.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-506c1408\x2d374f\x2d4a53\x2db76f\x2d1dcab3ec1b79.mount has successfully entered the 'dead' state.
Jan 23 16:56:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-506c1408\x2d374f\x2d4a53\x2db76f\x2d1dcab3ec1b79.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-506c1408\x2d374f\x2d4a53\x2db76f\x2d1dcab3ec1b79.mount has successfully entered the 'dead' state.
Jan 23 16:56:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-506c1408\x2d374f\x2d4a53\x2db76f\x2d1dcab3ec1b79.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-506c1408\x2d374f\x2d4a53\x2db76f\x2d1dcab3ec1b79.mount has successfully entered the 'dead' state.
Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.074284431Z" level=info msg="runSandbox: deleting pod ID 27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff from idIndex" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.074309480Z" level=info msg="runSandbox: removing pod sandbox 27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.074322502Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.074336587Z" level=info msg="runSandbox: unmounting shmPath for sandbox 27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.086422307Z" level=info msg="runSandbox: removing pod sandbox from storage: 27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.089386967Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:34.089407806Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=4a863a19-5a7c-4caf-8b64-9cb3f144edee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:34.089622 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:34.089678 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:56:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:34.089706 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:56:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:34.089766 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(27da3af2b8a70f97767544d10c0f6d95b8ecf49d9dbe8e48c12b403870fb31ff): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.030810812Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.031000367Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e3a2ad9a\x2d1f8d\x2d42f3\x2d88ec\x2d1a2b153823f9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e3a2ad9a\x2d1f8d\x2d42f3\x2d88ec\x2d1a2b153823f9.mount has successfully entered the 'dead' state. Jan 23 16:56:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e3a2ad9a\x2d1f8d\x2d42f3\x2d88ec\x2d1a2b153823f9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e3a2ad9a\x2d1f8d\x2d42f3\x2d88ec\x2d1a2b153823f9.mount has successfully entered the 'dead' state. Jan 23 16:56:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e3a2ad9a\x2d1f8d\x2d42f3\x2d88ec\x2d1a2b153823f9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e3a2ad9a\x2d1f8d\x2d42f3\x2d88ec\x2d1a2b153823f9.mount has successfully entered the 'dead' state. 
Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.081413905Z" level=info msg="runSandbox: deleting pod ID 3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b from idIndex" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.081443803Z" level=info msg="runSandbox: removing pod sandbox 3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.081458816Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.081479116Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.093426667Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.096739227Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:35.096761931Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=8683ac6d-321c-4c3e-aab4-6ab463300cb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:35.096963 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:35.097012 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:35.097039 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:35.097093 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(3ca57b94eae955ac3fb225693820ca29daa11f4969da5edf556ee6c5f537a04b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.041375049Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.041412756Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.041853402Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.041891913Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-75d5bd89\x2d7f27\x2d4648\x2da878\x2d1bf3bba45d47.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-75d5bd89\x2d7f27\x2d4648\x2da878\x2d1bf3bba45d47.mount has successfully entered the 'dead' state. Jan 23 16:56:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-73661a5b\x2d6791\x2d4790\x2db23b\x2df0d78fa720e9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-73661a5b\x2d6791\x2d4790\x2db23b\x2df0d78fa720e9.mount has successfully entered the 'dead' state. Jan 23 16:56:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-73661a5b\x2d6791\x2d4790\x2db23b\x2df0d78fa720e9.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-73661a5b\x2d6791\x2d4790\x2db23b\x2df0d78fa720e9.mount has successfully entered the 'dead' state. Jan 23 16:56:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-75d5bd89\x2d7f27\x2d4648\x2da878\x2d1bf3bba45d47.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-75d5bd89\x2d7f27\x2d4648\x2da878\x2d1bf3bba45d47.mount has successfully entered the 'dead' state. Jan 23 16:56:36 hub-master-0.workload.bos2.lab systemd[1]: run-netns-73661a5b\x2d6791\x2d4790\x2db23b\x2df0d78fa720e9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-73661a5b\x2d6791\x2d4790\x2db23b\x2df0d78fa720e9.mount has successfully entered the 'dead' state. Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.097281265Z" level=info msg="runSandbox: deleting pod ID 0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88 from idIndex" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.097308626Z" level=info msg="runSandbox: removing pod sandbox 0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.097324098Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.097337920Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.105309695Z" level=info msg="runSandbox: deleting pod ID 0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e from idIndex" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.105338491Z" level=info msg="runSandbox: removing pod sandbox 0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.105353292Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.105365494Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.113432517Z" level=info msg="runSandbox: removing pod sandbox from 
storage: 0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.116928702Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.116946305Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1ff69815-3077-4d15-85e7-28aa5e20c33e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:36.117151 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:36.117193 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:36.117219 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:36.117267 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.121433580Z" level=info msg="runSandbox: removing pod sandbox from storage: 0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.124676205Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.124694801Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=ae05bede-2115-4f61-967e-02acc39ed937 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:36.124883 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:36.124915 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:36.124936 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:36.124976 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:56:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:36.995700 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.996121095Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:36.996171626Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.011033439Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/c342c7b5-72fa-4cdd-8236-8a3f612e8170 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.011061084Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.035921293Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.035955319Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.039751714Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.039782991Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553" 
id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5e1b7bf8\x2daa90\x2d4cee\x2d8a4a\x2d6a5967bc5df1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5e1b7bf8\x2daa90\x2d4cee\x2d8a4a\x2d6a5967bc5df1.mount has successfully entered the 'dead' state. Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b3499a95\x2dd9a9\x2d4a50\x2db9a7\x2d8099ed306942.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b3499a95\x2dd9a9\x2d4a50\x2db9a7\x2d8099ed306942.mount has successfully entered the 'dead' state. Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-75d5bd89\x2d7f27\x2d4648\x2da878\x2d1bf3bba45d47.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-75d5bd89\x2d7f27\x2d4648\x2da878\x2d1bf3bba45d47.mount has successfully entered the 'dead' state. Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0b5dff90ba1b0016d5b149743f332fe6ef92359de387786dc551a78f0bebea7e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0d197583a85eb421e39b9fdff8c3aeee167df214321353fb1d5563377da44f88-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b3499a95\x2dd9a9\x2d4a50\x2db9a7\x2d8099ed306942.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b3499a95\x2dd9a9\x2d4a50\x2db9a7\x2d8099ed306942.mount has successfully entered the 'dead' state. Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5e1b7bf8\x2daa90\x2d4cee\x2d8a4a\x2d6a5967bc5df1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5e1b7bf8\x2daa90\x2d4cee\x2d8a4a\x2d6a5967bc5df1.mount has successfully entered the 'dead' state. Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b3499a95\x2dd9a9\x2d4a50\x2db9a7\x2d8099ed306942.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b3499a95\x2dd9a9\x2d4a50\x2db9a7\x2d8099ed306942.mount has successfully entered the 'dead' state. Jan 23 16:56:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5e1b7bf8\x2daa90\x2d4cee\x2d8a4a\x2d6a5967bc5df1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5e1b7bf8\x2daa90\x2d4cee\x2d8a4a\x2d6a5967bc5df1.mount has successfully entered the 'dead' state. 
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.083277242Z" level=info msg="runSandbox: deleting pod ID e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825 from idIndex" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.083301636Z" level=info msg="runSandbox: removing pod sandbox e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.083316289Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.083327130Z" level=info msg="runSandbox: unmounting shmPath for sandbox e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.087397144Z" level=info msg="runSandbox: deleting pod ID d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553 from idIndex" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.087422954Z" level=info msg="runSandbox: removing pod sandbox d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.087437044Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.087448827Z" level=info msg="runSandbox: unmounting shmPath for sandbox d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.099458216Z" level=info msg="runSandbox: removing pod sandbox from storage: e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.099457990Z" level=info msg="runSandbox: removing pod sandbox from storage: d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.102306468Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.102324989Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=742f6c26-db23-4397-89c5-44198dcd52b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:37.102558 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:37.102602 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:37.102625 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:37.102675 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.105721206Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.105743815Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=8943890e-0339-4a4d-98ca-30fa2e9c0c55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:37.105928 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:37.105971 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:37.105994 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:37.106040 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 16:56:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:37.996984 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.997324633Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:37.997365377Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:38.007543062Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/14736abd-9ef3-4b0e-8533-7a151567c628 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:38.007566002Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d754939844d01620b50f9e036094f9c94268a6a5262db70d34059c135c679553-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:56:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e1c024aa27337cbacf49750ff162b497d16269ff7dfd9fe44490ba5f3b637825-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 16:56:40 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00099|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 deletes)
Jan 23 16:56:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:40.996264 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 16:56:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:40.996620031Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:40.996661313Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:41.008148535Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f86e69d2-c661-4253-a503-1427c51585f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:41.008175456Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:43.997073 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:56:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:43.997645 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:56:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:44.996084 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:44.996351803Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:44.996391678Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:45.007094681Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/d3e860a1-bf2f-456b-8a61-eb52a655788f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:45.007124450Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:46.996289 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 16:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:46.996790229Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:46.996828100Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:47.007484188Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/83a34572-0196-4076-8063-b27c43ac6fed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:47.007502862Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:47.997099 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 16:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:47.997413502Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:47.997446788Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:48.007912895Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/84181b1d-6adb-4f4e-86f5-8432955c27a9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:48.007931171Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:48.995847 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 16:56:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:48.995976 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 16:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:48.996212703Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:48.996253916Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:48.996290994Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:48.996262091Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:49.016093486Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/994ced09-1d3a-4332-bac9-6cb01522c6a5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:49.016119231Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:49.016095164Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/23e6f8c4-b7d9-4575-8bbe-a62dc891da3c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:49.016179539Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:50.996392 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 16:56:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:50.996593 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 16:56:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:50.996709 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 16:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:50.996700491Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:50.996739314Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:50.996837773Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:50.996871619Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:50.996961951Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:50.997004448Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 16:56:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:51.018023849Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/6e74cc40-8a30-4bd4-b005-3375fe196a4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:51.018049004Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:51.019081892Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/856e6ba0-0058-4797-bc3c-420e3499b176 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:51.019102507Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:51.019738559Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/0e4d98a1-5f09-4276-8bca-42c97446d74b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:56:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:51.019760869Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:56:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:56:54.996867 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:56:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:56:54.997425 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:56:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:56:58.143135375Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 16:57:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:06.996225 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:57:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:06.996860 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.858718240Z" level=info msg="NetworkStart: stopping network for sandbox a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.858906634Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/05ac5e4f-114e-4bcd-a400-2cb4a9051ad5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.858930990Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.858937464Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.858943791Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.861862258Z" level=info msg="NetworkStart: stopping network for sandbox ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862008356Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/3dc39189-3776-41ae-9867-8ad2e3628ba8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862036642Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862045620Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862052683Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862593598Z" level=info msg="NetworkStart: stopping network for sandbox 9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862698217Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/0abb3f86-9e19-4c95-8b1e-53bac4f12bf7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862717720Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862724799Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.862730743Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.863199897Z" level=info msg="NetworkStart: stopping network for sandbox 7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.863334956Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/46451476-427e-43dc-b7c9-21f8bddd402a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.863357549Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.863364810Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.863370794Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.865006451Z" level=info msg="NetworkStart: stopping network for sandbox 3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.865110637Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/69f6e09b-f720-4543-a0eb-3bb2ee1525ef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.865129938Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.865136154Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:57:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:10.865141594Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:57:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:18.021448927Z" level=info msg="NetworkStart: stopping network for sandbox dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:18.021602872Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/d919ae96-412e-4826-8f7b-4c3b88f9de75 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:57:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:18.021628960Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:57:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:18.021637684Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:57:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:18.021647233Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:57:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:21.996951 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:57:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:21.997566 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:57:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:22.023848042Z" level=info msg="NetworkStart: stopping network for sandbox 27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:22.023992129Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/c342c7b5-72fa-4cdd-8236-8a3f612e8170 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:57:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:22.024018680Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:57:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:22.024025232Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:57:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:22.024031280Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:57:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:23.021049423Z" level=info msg="NetworkStart: stopping network for sandbox f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:23.021202592Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/14736abd-9ef3-4b0e-8533-7a151567c628 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 16:57:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:23.021235408Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 16:57:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:23.021243563Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 16:57:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:23.021251204Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 16:57:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:26.020717320Z" level=info msg="NetworkStart: stopping network for sandbox 7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:26
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:26.020873266Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f86e69d2-c661-4253-a503-1427c51585f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:26.020898403Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:26.020906059Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:26.020912037Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:27.877261 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:57:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:27.877281 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:57:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:27.877288 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:57:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:27.877294 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:57:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:27.877302 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:57:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:27.877308 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:57:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:27.877316 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:57:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:28.142366551Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:57:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:30.020172168Z" level=info msg="NetworkStart: stopping network for sandbox b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:30.020393240Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/d3e860a1-bf2f-456b-8a61-eb52a655788f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: 
MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:30.020419867Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:30.020427031Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:30.020433890Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:32.020498792Z" level=info msg="NetworkStart: stopping network for sandbox ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:32.020649023Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/83a34572-0196-4076-8063-b27c43ac6fed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:32.020673982Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:32.020682111Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:32.020689252Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:33.020153790Z" level=info msg="NetworkStart: stopping network for sandbox e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:33.020303251Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/84181b1d-6adb-4f4e-86f5-8432955c27a9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:33.020327491Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:33.020335216Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:33.020342160Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:57:34.029931528Z" level=info msg="NetworkStart: stopping network for sandbox bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.029959728Z" level=info msg="NetworkStart: stopping network for sandbox ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.030073303Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/23e6f8c4-b7d9-4575-8bbe-a62dc891da3c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.030085881Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/994ced09-1d3a-4332-bac9-6cb01522c6a5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.030095974Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.030111320Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.030117100Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.030126148Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.030118537Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:34.030193222Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.031680252Z" level=info msg="NetworkStart: stopping network for sandbox 646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.031841762Z" level=info msg="NetworkStart: stopping network for sandbox 90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.031847295Z" level=info msg="Got pod network 
&{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/6e74cc40-8a30-4bd4-b005-3375fe196a4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.031926645Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.031934793Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.031944028Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.031976656Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/856e6ba0-0058-4797-bc3c-420e3499b176 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.031998417Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.032005772Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.032011691Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.033118669Z" level=info msg="NetworkStart: stopping network for sandbox daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.033226228Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/0e4d98a1-5f09-4276-8bca-42c97446d74b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.033245234Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.033252004Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.033257703Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network 
\"multus-cni-network\" (type=multus)" Jan 23 16:57:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:36.996526 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.997234889Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=84dbb30e-e869-4732-adbb-c6617d69a648 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.997360450Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=84dbb30e-e869-4732-adbb-c6617d69a648 name=/runtime.v1.ImageService/ImageStatus Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.997870758Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=85ccb7ee-d399-4434-ba9f-a8a3c61d7c1d name=/runtime.v1.ImageService/ImageStatus Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.997999598Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=85ccb7ee-d399-4434-ba9f-a8a3c61d7c1d name=/runtime.v1.ImageService/ImageStatus Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.998833198Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=658ece98-2276-4d98-80bf-44eb4ddba520 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 16:57:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:36.998896666Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:57:37 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope. -- Subject: Unit crio-conmon-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope has finished starting up. -- -- The start-up result is done. Jan 23 16:57:37 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4. -- Subject: Unit crio-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.108958823Z" level=info msg="Created container d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=658ece98-2276-4d98-80bf-44eb4ddba520 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.109462452Z" level=info msg="Starting container: d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" id=2343196c-b021-461f-acc3-88f01269fbc8 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.129193199Z" level=info msg="Started container" PID=90689 containerID=d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=2343196c-b021-461f-acc3-88f01269fbc8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.133637984Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.144477985Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.144494197Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.144511674Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.153588574Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.153604284Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.153613236Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.162220491Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.162235466Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.162244515Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.170021309Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.170041337Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab conmon[90677]: conmon d7dcfbc532e91c4b1ab8 : container 90689 exited with status 1
Jan 23 16:57:37 hub-master-0.workload.bos2.lab systemd[1]: crio-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope has successfully entered the 'dead' state.
Jan 23 16:57:37 hub-master-0.workload.bos2.lab systemd[1]: crio-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope: Consumed 546ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope completed and consumed the indicated resources.
Jan 23 16:57:37 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope has successfully entered the 'dead' state.
Jan 23 16:57:37 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope: Consumed 47ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4.scope completed and consumed the indicated resources.
Jan 23 16:57:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:37.963679 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/187.log"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:37.964198 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/186.log"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:37.965233 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" exitCode=1
Jan 23 16:57:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:37.965258 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4}
Jan 23 16:57:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:37.965282 8631 scope.go:115] "RemoveContainer" containerID="5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:37.966310 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4"
Jan 23 16:57:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:37.966137189Z" level=info msg="Removing container: 5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93" id=990a16ea-49c0-4901-baa1-a293e8d02933 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:57:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:37.966838 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:57:37 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-f3dad260a848c645f72b156b5687838dbb1fcc6169f364f6ec034b52852e094a-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-f3dad260a848c645f72b156b5687838dbb1fcc6169f364f6ec034b52852e094a-merged.mount has successfully entered the 'dead' state.
Jan 23 16:57:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:38.011694773Z" level=info msg="Removed container 5d2a37d32defb2ee58b1adfbb28f6cde0bd03339008de1c7ce779bd75a5f8c93: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=990a16ea-49c0-4901-baa1-a293e8d02933 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 16:57:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493058.1187] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 16:57:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493058.1192] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 16:57:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493058.1193] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 16:57:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493058.1195] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:57:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493058.1199] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 16:57:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493058.1204] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:57:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:38.968289 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/187.log"
Jan 23 16:57:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493059.6462] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 16:57:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:40.667698 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 16:57:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:40.668741 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4"
Jan 23 16:57:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:40.669273 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:57:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:54.996306 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4"
Jan 23 16:57:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:54.996809 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.870138167Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.870391178Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.873072465Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.873115819Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.873495096Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.873524726Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.873554096Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.873595545Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-05ac5e4f\x2d114e\x2d4bcd\x2da400\x2d2cb4a9051ad5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-05ac5e4f\x2d114e\x2d4bcd\x2da400\x2d2cb4a9051ad5.mount has successfully entered the 'dead' state.
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.875910113Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.875945301Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-69f6e09b\x2df720\x2d4543\x2da0eb\x2d3bb2ee1525ef.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-69f6e09b\x2df720\x2d4543\x2da0eb\x2d3bb2ee1525ef.mount has successfully entered the 'dead' state.
Jan 23 16:57:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-46451476\x2d427e\x2d43dc\x2db7c9\x2d21f8bddd402a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-46451476\x2d427e\x2d43dc\x2db7c9\x2d21f8bddd402a.mount has successfully entered the 'dead' state.
Jan 23 16:57:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3dc39189\x2d3776\x2d41ae\x2d9867\x2d8ad2e3628ba8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3dc39189\x2d3776\x2d41ae\x2d9867\x2d8ad2e3628ba8.mount has successfully entered the 'dead' state.
Jan 23 16:57:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0abb3f86\x2d9e19\x2d4c95\x2d8b1e\x2d53bac4f12bf7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-0abb3f86\x2d9e19\x2d4c95\x2d8b1e\x2d53bac4f12bf7.mount has successfully entered the 'dead' state.
Jan 23 16:57:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-05ac5e4f\x2d114e\x2d4bcd\x2da400\x2d2cb4a9051ad5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-05ac5e4f\x2d114e\x2d4bcd\x2da400\x2d2cb4a9051ad5.mount has successfully entered the 'dead' state.
Jan 23 16:57:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3dc39189\x2d3776\x2d41ae\x2d9867\x2d8ad2e3628ba8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3dc39189\x2d3776\x2d41ae\x2d9867\x2d8ad2e3628ba8.mount has successfully entered the 'dead' state.
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.915349668Z" level=info msg="runSandbox: deleting pod ID ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057 from idIndex" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.915350879Z" level=info msg="runSandbox: deleting pod ID a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805 from idIndex" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.915406797Z" level=info msg="runSandbox: removing pod sandbox a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.915422555Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.915438801Z" level=info msg="runSandbox: unmounting shmPath for sandbox a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.915447990Z" level=info msg="runSandbox: removing pod sandbox ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.915475512Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.915490837Z" level=info msg="runSandbox: unmounting shmPath for sandbox ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.920292577Z" level=info msg="runSandbox: deleting pod ID 9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080 from idIndex" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.920315198Z" level=info msg="runSandbox: deleting pod ID 7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3 from idIndex" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.920348306Z" level=info msg="runSandbox: removing pod sandbox 7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.920361062Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.920373112Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.920318591Z" level=info msg="runSandbox: removing pod sandbox 9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.920422217Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.920441069Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.921311652Z" level=info msg="runSandbox: deleting pod ID 3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e from idIndex" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.921336223Z" level=info msg="runSandbox: removing pod sandbox 3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.921348785Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.921362810Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.928467146Z" level=info msg="runSandbox: removing pod sandbox from storage: ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.928493432Z" level=info msg="runSandbox: removing pod sandbox from storage: a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.932147298Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.932169322Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=53b2f4cb-f3a3-448e-9cc9-ecaafd4b5d00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.932432 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.932480 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.932504 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.932554 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.932464127Z" level=info msg="runSandbox: removing pod sandbox from storage: 7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.932490198Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.932499985Z" level=info msg="runSandbox: removing pod sandbox from storage: 9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.935627056Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.935646216Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=0688498b-79f8-4512-be95-20c4214fccbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.935842 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.935876 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.935895 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.935931 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.938800179Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.938819217Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=684b1634-8db3-4fec-9371-933884b67def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.939088 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.939119 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.939140 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready?
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.939178 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.941810800Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.941829142Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d7caf3a2-7601-4c21-a109-5a01b9f412cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.942078 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.942110 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.942130 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.942168 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.944842059Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.944859053Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=db78989c-cef3-4fd8-a2de-67c3d940d963 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.945032 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.945065 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.945086 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:57:55.945121 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:55.998183 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:55.998266 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:55.998395 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:55.998524 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:57:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:57:55.998533 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.998662720Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.998692565Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.998782276Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.998808469Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.998860164Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.998887212Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.998973450Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.999001037Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.999089545Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:57:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:55.999134896Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.028095910Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/f6071517-9ebd-4e4f-9892-5531e4137166 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.028122287Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.031245762Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 
NetNS:/var/run/netns/570f6851-65e3-494d-8e38-22e2e11b581b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.031270193Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.033055070Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/60841a3b-4199-456d-abae-5d5dcc1e321a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.033074796Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.034545129Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/36bce5e8-6dfd-4b12-b5c2-e7a0be85a704 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.034569231Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.035266843Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/643826c9-7133-4a98-9889-8e06a541446e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:57:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:56.035292450Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-69f6e09b\x2df720\x2d4543\x2da0eb\x2d3bb2ee1525ef.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-69f6e09b\x2df720\x2d4543\x2da0eb\x2d3bb2ee1525ef.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-69f6e09b\x2df720\x2d4543\x2da0eb\x2d3bb2ee1525ef.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-69f6e09b\x2df720\x2d4543\x2da0eb\x2d3bb2ee1525ef.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-46451476\x2d427e\x2d43dc\x2db7c9\x2d21f8bddd402a.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-46451476\x2d427e\x2d43dc\x2db7c9\x2d21f8bddd402a.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-46451476\x2d427e\x2d43dc\x2db7c9\x2d21f8bddd402a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-46451476\x2d427e\x2d43dc\x2db7c9\x2d21f8bddd402a.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3dc39189\x2d3776\x2d41ae\x2d9867\x2d8ad2e3628ba8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3dc39189\x2d3776\x2d41ae\x2d9867\x2d8ad2e3628ba8.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0abb3f86\x2d9e19\x2d4c95\x2d8b1e\x2d53bac4f12bf7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0abb3f86\x2d9e19\x2d4c95\x2d8b1e\x2d53bac4f12bf7.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0abb3f86\x2d9e19\x2d4c95\x2d8b1e\x2d53bac4f12bf7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0abb3f86\x2d9e19\x2d4c95\x2d8b1e\x2d53bac4f12bf7.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3ea3157eaf8cf51dbc91dbf7b387f64194be523c91118955be1e5c23c0c4af5e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-05ac5e4f\x2d114e\x2d4bcd\x2da400\x2d2cb4a9051ad5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-05ac5e4f\x2d114e\x2d4bcd\x2da400\x2d2cb4a9051ad5.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7e89e87656f8120bb0a87ec52456ff252aa91d08625130c15d90ec1f00bd5fe3-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ec0c17ffb372dc912b211868e4100b443bed5a0674b460dd7bbc5b1e1b6b7057-userdata-shm.mount has successfully entered the 'dead' state. 
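Every failure above follows the same shape: kubelet asks CRI-O to run a pod sandbox, CRI-O delegates the network attach to Multus, and Multus refuses to call its delegate plugin until the readiness indicator file /var/run/multus/cni/net.d/10-ovn-kubernetes.conf (written by OVN-Kubernetes once it is healthy) exists. That wait is a poll loop, and when it expires the standard Kubernetes wait helper's timeout error stringifies to exactly the "timed out waiting for the condition" text in each entry. A minimal sketch of the pattern, assuming the k8s.io/apimachinery wait helper rather than quoting Multus's actual source:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until path exists or timeout elapses.
// wait.PollImmediate checks once up front and then on every tick; on
// expiry it returns an error that renders as
// "timed out waiting for the condition".
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		_, err := os.Stat(path)
		return err == nil, nil // keep polling while the file is absent
	})
}

func main() {
	err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
	if err != nil {
		fmt.Println("pollimmediate error:", err) // matches the log text above
	}
}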
Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9a277eb83bc7f4a927231a5adcb35e8cb131c2a3d3a8ede25c28b0574bd1b080-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:57:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a136f3e572c8a1385199db8f1f0c54472fc7157916d2f7c243ddf82eed5e5805-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:57:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:57:58.142747459Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.032513928Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.032559584Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d919ae96\x2d412e\x2d4826\x2d8f7b\x2d4c3b88f9de75.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d919ae96\x2d412e\x2d4826\x2d8f7b\x2d4c3b88f9de75.mount has successfully entered the 'dead' state. Jan 23 16:58:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d919ae96\x2d412e\x2d4826\x2d8f7b\x2d4c3b88f9de75.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d919ae96\x2d412e\x2d4826\x2d8f7b\x2d4c3b88f9de75.mount has successfully entered the 'dead' state. Jan 23 16:58:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d919ae96\x2d412e\x2d4826\x2d8f7b\x2d4c3b88f9de75.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d919ae96\x2d412e\x2d4826\x2d8f7b\x2d4c3b88f9de75.mount has successfully entered the 'dead' state. 
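The mount units in these entries (run-netns-69f6e09b\x2df720..., run-ipcns-..., run-containers-storage-overlay\x2dcontainers-...-userdata-shm.mount) use systemd's unit-name escaping, in which a literal "-" inside a path component becomes \x2d. A small sketch that reverses the escaping to recover the underlying namespace or sandbox name (an illustrative helper, not part of systemd):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's \xNN escaping in unit names.
func unescapeUnit(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		// A \xNN sequence encodes one byte; \x2d is "-".
		if s[i] == '\\' && i+3 < len(s) && s[i+1] == 'x' {
			if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n))
				i += 3
				continue
			}
		}
		b.WriteByte(s[i])
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-69f6e09b\x2df720\x2d4543\x2da0eb\x2d3bb2ee1525ef.mount`))
	// -> run-netns-69f6e09b-f720-4543-a0eb-3bb2ee1525ef.mount
}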
Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.071349463Z" level=info msg="runSandbox: deleting pod ID dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f from idIndex" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.071377943Z" level=info msg="runSandbox: removing pod sandbox dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.071403442Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.071416322Z" level=info msg="runSandbox: unmounting shmPath for sandbox dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.083501840Z" level=info msg="runSandbox: removing pod sandbox from storage: dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.086353376Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:03.086373205Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=8c7ffdcc-bf56-42f1-b38c-81f20ee1e928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:03.086538 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:58:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:03.086759 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:03.086782 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:03.086836 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(dfcbe437abba4e8333d688ece8ce8f0ee63e5766e3fad5733e42596d0a32670f): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:58:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:06.996524 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:58:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:06.997056 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.035109721Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.035289145Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c342c7b5\x2d72fa\x2d4cdd\x2d8236\x2d8a3f612e8170.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c342c7b5\x2d72fa\x2d4cdd\x2d8236\x2d8a3f612e8170.mount has successfully entered the 'dead' state. Jan 23 16:58:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c342c7b5\x2d72fa\x2d4cdd\x2d8236\x2d8a3f612e8170.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c342c7b5\x2d72fa\x2d4cdd\x2d8236\x2d8a3f612e8170.mount has successfully entered the 'dead' state. Jan 23 16:58:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c342c7b5\x2d72fa\x2d4cdd\x2d8236\x2d8a3f612e8170.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c342c7b5\x2d72fa\x2d4cdd\x2d8236\x2d8a3f612e8170.mount has successfully entered the 'dead' state. 
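The RemoveContainer/CrashLoopBackOff entry for ovnkube-node-897lw is the most plausible root-cause thread for everything above: OVN-Kubernetes writes 10-ovn-kubernetes.conf only once ovnkube-node is running, so while that container crash-loops, every Multus add and delete stalls on the missing readiness file. The "back-off 5m0s" figure is kubelet's restart back-off at its cap; as commonly documented it starts at 10s and doubles after each failed restart up to 5m. A sketch of that schedule (the constants are the documented defaults, not values read from this cluster's config):

package main

import (
	"fmt"
	"time"
)

func main() {
	// kubelet CrashLoopBackOff: 10s initial delay, doubled after each
	// failed restart, capped at 5 minutes ("back-off 5m0s" in the log).
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}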
Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.076322144Z" level=info msg="runSandbox: deleting pod ID 27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3 from idIndex" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.076352074Z" level=info msg="runSandbox: removing pod sandbox 27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.076367503Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.076382529Z" level=info msg="runSandbox: unmounting shmPath for sandbox 27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.093470121Z" level=info msg="runSandbox: removing pod sandbox from storage: 27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.096953518Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:07.096974204Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=614bee26-48d7-430b-a08e-49c2b6549e22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:07.097200 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:58:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:07.097244 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:58:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:07.097265 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:58:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:07.097314 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(27f6de1d8738d4804b1903874896abe821ac0ba2d0de3c1c45adecb625f150e3): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.031194011Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.031251003Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-14736abd\x2d9ef3\x2d4b0e\x2d8533\x2d7a151567c628.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-14736abd\x2d9ef3\x2d4b0e\x2d8533\x2d7a151567c628.mount has successfully entered the 'dead' state. Jan 23 16:58:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-14736abd\x2d9ef3\x2d4b0e\x2d8533\x2d7a151567c628.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-14736abd\x2d9ef3\x2d4b0e\x2d8533\x2d7a151567c628.mount has successfully entered the 'dead' state. Jan 23 16:58:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-14736abd\x2d9ef3\x2d4b0e\x2d8533\x2d7a151567c628.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-14736abd\x2d9ef3\x2d4b0e\x2d8533\x2d7a151567c628.mount has successfully entered the 'dead' state. 
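Note that both CNI directions fail on the same gate: sandbox creation reports failed (add) with "still waiting for readinessindicatorfile", and the cleanup path reports failed (delete) with "PollImmediate error waiting for ReadinessIndicatorFile (on del)". In both cases Multus waits for the same file before invoking the delegate plugin, so checking whether OVN-Kubernetes has produced it yet tells you which side of the stall you are on. A minimal stand-alone check (an assumed diagnostic helper, not an existing tool):

package main

import (
	"fmt"
	"os"
)

func main() {
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if _, err := os.Stat(indicator); err != nil {
		// Expected state while ovnkube-node is still crash-looping.
		fmt.Println("not ready:", err)
		return
	}
	fmt.Println("ready:", indicator) // adds and deletes should start succeeding
}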
Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.075309779Z" level=info msg="runSandbox: deleting pod ID f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53 from idIndex" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.075339331Z" level=info msg="runSandbox: removing pod sandbox f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.075357069Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.075371132Z" level=info msg="runSandbox: unmounting shmPath for sandbox f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.087469836Z" level=info msg="runSandbox: removing pod sandbox from storage: f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.091305112Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:08.091324329Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=7acba694-a3dd-4f95-b1e7-3090f9b656b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:08.091520 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:58:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:08.091570 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:08.091597 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:08.091655 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f71afcc235400d33d2ecf64c450728a77296ce3db10459788402f92bc67a5b53): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.031505727Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.031548194Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f86e69d2\x2dc661\x2d4253\x2da503\x2d1427c51585f5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f86e69d2\x2dc661\x2d4253\x2da503\x2d1427c51585f5.mount has successfully entered the 'dead' state. Jan 23 16:58:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f86e69d2\x2dc661\x2d4253\x2da503\x2d1427c51585f5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f86e69d2\x2dc661\x2d4253\x2da503\x2d1427c51585f5.mount has successfully entered the 'dead' state. 
Jan 23 16:58:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f86e69d2\x2dc661\x2d4253\x2da503\x2d1427c51585f5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f86e69d2\x2dc661\x2d4253\x2da503\x2d1427c51585f5.mount has successfully entered the 'dead' state. Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.072315710Z" level=info msg="runSandbox: deleting pod ID 7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2 from idIndex" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.072349151Z" level=info msg="runSandbox: removing pod sandbox 7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.072367306Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.072384186Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.084411429Z" level=info msg="runSandbox: removing pod sandbox from storage: 7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.087803157Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:11.087822490Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=d68c5072-89fd-4e67-b712-6582d573394e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:11.087963 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:58:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:11.088011 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:58:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:11.088034 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:58:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:11.088080 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7ba5ee3512c77a2f41477167e4b737d6fdafcb44eed04209b3b34d7559a6a0d2): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:58:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:13.996246 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:13.996682988Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:13.996739082Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:14.009661700Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/8d22475b-e405-4ed2-9dd2-508ee79d33c3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:14.009685346Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.031858588Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error 
waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.031903107Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d3e860a1\x2dbf2f\x2d456b\x2d8a61\x2deb52a655788f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d3e860a1\x2dbf2f\x2d456b\x2d8a61\x2deb52a655788f.mount has successfully entered the 'dead' state. Jan 23 16:58:15 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d3e860a1\x2dbf2f\x2d456b\x2d8a61\x2deb52a655788f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d3e860a1\x2dbf2f\x2d456b\x2d8a61\x2deb52a655788f.mount has successfully entered the 'dead' state. Jan 23 16:58:15 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d3e860a1\x2dbf2f\x2d456b\x2d8a61\x2deb52a655788f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d3e860a1\x2dbf2f\x2d456b\x2d8a61\x2deb52a655788f.mount has successfully entered the 'dead' state. Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.080316265Z" level=info msg="runSandbox: deleting pod ID b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2 from idIndex" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.080344615Z" level=info msg="runSandbox: removing pod sandbox b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.080362251Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.080378930Z" level=info msg="runSandbox: unmounting shmPath for sandbox b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.097463882Z" level=info msg="runSandbox: removing pod sandbox from storage: b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.100658079Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:15.100677670Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=1a824e51-a643-4421-b5ea-4a9fe182e5fc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:15.100910 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:58:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:15.100956 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:15.100977 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:15.101024 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b33f1573298a3efa7a008ad844cc3d7bcc8eb410ff9d7a43574521a1d41280c2): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.031028767Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.031067633Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-83a34572\x2d0196\x2d4076\x2d8063\x2db27c43ac6fed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-83a34572\x2d0196\x2d4076\x2d8063\x2db27c43ac6fed.mount has successfully entered the 'dead' state. Jan 23 16:58:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-83a34572\x2d0196\x2d4076\x2d8063\x2db27c43ac6fed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-83a34572\x2d0196\x2d4076\x2d8063\x2db27c43ac6fed.mount has successfully entered the 'dead' state. Jan 23 16:58:17 hub-master-0.workload.bos2.lab systemd[1]: run-netns-83a34572\x2d0196\x2d4076\x2d8063\x2db27c43ac6fed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-83a34572\x2d0196\x2d4076\x2d8063\x2db27c43ac6fed.mount has successfully entered the 'dead' state. 
Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.067277939Z" level=info msg="runSandbox: deleting pod ID ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52 from idIndex" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.067303033Z" level=info msg="runSandbox: removing pod sandbox ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.067315926Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.067327406Z" level=info msg="runSandbox: unmounting shmPath for sandbox ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.084419205Z" level=info msg="runSandbox: removing pod sandbox from storage: ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.090724559Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.090775528Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=8fdda8b2-578c-44e1-af3c-6e624966dd0d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:17.091002 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:58:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:17.091049 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:58:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:17.091073 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:58:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:17.091122 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ffcf67bcd275d2989309b969f8ed4f4a88c3ca400ac613ff86bc8c62486ecf52): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:58:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:17.997188 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.997565415Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:17.997602527Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.009346321Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/2c23d7fa-6480-47c4-8998-09cc240ae92b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.009373964Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.031284150Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.031312253Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-84181b1d\x2d6adb\x2d4f4e\x2d86f5\x2d8432955c27a9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-84181b1d\x2d6adb\x2d4f4e\x2d86f5\x2d8432955c27a9.mount has successfully entered the 'dead' state. Jan 23 16:58:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-84181b1d\x2d6adb\x2d4f4e\x2d86f5\x2d8432955c27a9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-84181b1d\x2d6adb\x2d4f4e\x2d86f5\x2d8432955c27a9.mount has successfully entered the 'dead' state. Jan 23 16:58:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-84181b1d\x2d6adb\x2d4f4e\x2d86f5\x2d8432955c27a9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-84181b1d\x2d6adb\x2d4f4e\x2d86f5\x2d8432955c27a9.mount has successfully entered the 'dead' state. 
Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.073283235Z" level=info msg="runSandbox: deleting pod ID e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d from idIndex" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.073306277Z" level=info msg="runSandbox: removing pod sandbox e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.073317428Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.073328062Z" level=info msg="runSandbox: unmounting shmPath for sandbox e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.094407302Z" level=info msg="runSandbox: removing pod sandbox from storage: e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.097243630Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:18.097260565Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=3b996e29-8745-4cf6-b3ab-b2954eb37a86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:18.097445 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:58:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:18.097488 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:58:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:18.097508 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:58:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:18.097556 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e33751ef8ee37e0ec11ab63d8a30a0eb1449f95accebe5c46fd49cb600a26f8d): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.040627769Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.040658548Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.042159736Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.042227745Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-23e6f8c4\x2db7d9\x2d4575\x2d8bbe\x2da62dc891da3c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-23e6f8c4\x2db7d9\x2d4575\x2d8bbe\x2da62dc891da3c.mount has successfully entered the 'dead' state. Jan 23 16:58:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-994ced09\x2d1d3a\x2d4332\x2dbac9\x2d6cb01522c6a5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-994ced09\x2d1d3a\x2d4332\x2dbac9\x2d6cb01522c6a5.mount has successfully entered the 'dead' state. Jan 23 16:58:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-23e6f8c4\x2db7d9\x2d4575\x2d8bbe\x2da62dc891da3c.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-23e6f8c4\x2db7d9\x2d4575\x2d8bbe\x2da62dc891da3c.mount has successfully entered the 'dead' state. Jan 23 16:58:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-994ced09\x2d1d3a\x2d4332\x2dbac9\x2d6cb01522c6a5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-994ced09\x2d1d3a\x2d4332\x2dbac9\x2d6cb01522c6a5.mount has successfully entered the 'dead' state. Jan 23 16:58:19 hub-master-0.workload.bos2.lab systemd[1]: run-netns-23e6f8c4\x2db7d9\x2d4575\x2d8bbe\x2da62dc891da3c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-23e6f8c4\x2db7d9\x2d4575\x2d8bbe\x2da62dc891da3c.mount has successfully entered the 'dead' state. Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.078290958Z" level=info msg="runSandbox: deleting pod ID bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4 from idIndex" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.078315433Z" level=info msg="runSandbox: removing pod sandbox bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.078328632Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.078340873Z" level=info msg="runSandbox: unmounting shmPath for sandbox bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.086344724Z" level=info msg="runSandbox: deleting pod ID ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c from idIndex" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.086372367Z" level=info msg="runSandbox: removing pod sandbox ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.086387359Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.086402308Z" level=info msg="runSandbox: unmounting shmPath for sandbox ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.099439522Z" level=info msg="runSandbox: removing pod sandbox from 
storage: bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.102421761Z" level=info msg="runSandbox: removing pod sandbox from storage: ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.102917706Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.102938081Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=66d21235-1787-4dbe-93d6-2980d153babd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:19.103186 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:58:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:19.103355 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:58:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:19.103377 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:58:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:19.103430 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.105897652Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:19.105915468Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=7b45d859-2a06-41fd-a2ed-2ec3b043b5ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:19.106130 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:58:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:19.106164 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:58:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:19.106184 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:58:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:19.106677 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 16:58:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-994ced09\x2d1d3a\x2d4332\x2dbac9\x2d6cb01522c6a5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-994ced09\x2d1d3a\x2d4332\x2dbac9\x2d6cb01522c6a5.mount has successfully entered the 'dead' state. Jan 23 16:58:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bd1d46b408adef4ed0bc49575105c4521aad01530a85d16786be87ee5c8f81c4-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:58:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ea84b84e7159d11f769f39e00787ed8039f8913996e8f37705b0dbd14b40f62c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:20.995427 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:20.995791782Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:20.995842209Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.007190886Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/19a52f06-5d9a-401f-b6f5-ff417b7f93ab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.007227738Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.043213826Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.043255368Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.043491661Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.043521372Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.043658429Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.043699798Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0e4d98a1\x2d5f09\x2d4276\x2d8bca\x2d42c97446d74b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0e4d98a1\x2d5f09\x2d4276\x2d8bca\x2d42c97446d74b.mount has successfully entered the 'dead' state. Jan 23 16:58:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-856e6ba0\x2d0058\x2d4797\x2dbc3c\x2d420e3499b176.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-856e6ba0\x2d0058\x2d4797\x2dbc3c\x2d420e3499b176.mount has successfully entered the 'dead' state. Jan 23 16:58:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6e74cc40\x2d8a30\x2d4bd4\x2db005\x2d3375fe196a4b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6e74cc40\x2d8a30\x2d4bd4\x2db005\x2d3375fe196a4b.mount has successfully entered the 'dead' state. Jan 23 16:58:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-856e6ba0\x2d0058\x2d4797\x2dbc3c\x2d420e3499b176.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-856e6ba0\x2d0058\x2d4797\x2dbc3c\x2d420e3499b176.mount has successfully entered the 'dead' state. Jan 23 16:58:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6e74cc40\x2d8a30\x2d4bd4\x2db005\x2d3375fe196a4b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6e74cc40\x2d8a30\x2d4bd4\x2db005\x2d3375fe196a4b.mount has successfully entered the 'dead' state. Jan 23 16:58:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0e4d98a1\x2d5f09\x2d4276\x2d8bca\x2d42c97446d74b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0e4d98a1\x2d5f09\x2d4276\x2d8bca\x2d42c97446d74b.mount has successfully entered the 'dead' state. Jan 23 16:58:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-856e6ba0\x2d0058\x2d4797\x2dbc3c\x2d420e3499b176.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-856e6ba0\x2d0058\x2d4797\x2dbc3c\x2d420e3499b176.mount has successfully entered the 'dead' state. Jan 23 16:58:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6e74cc40\x2d8a30\x2d4bd4\x2db005\x2d3375fe196a4b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6e74cc40\x2d8a30\x2d4bd4\x2db005\x2d3375fe196a4b.mount has successfully entered the 'dead' state. Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.079282727Z" level=info msg="runSandbox: deleting pod ID 90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3 from idIndex" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.079308551Z" level=info msg="runSandbox: removing pod sandbox 90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.079322723Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.079334233Z" level=info msg="runSandbox: unmounting shmPath for sandbox 90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.080317968Z" level=info msg="runSandbox: deleting pod ID 646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f from idIndex" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.080341082Z" level=info msg="runSandbox: removing pod sandbox 646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:58:21.080354567Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.080366024Z" level=info msg="runSandbox: unmounting shmPath for sandbox 646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.083313294Z" level=info msg="runSandbox: deleting pod ID daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c from idIndex" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.083337584Z" level=info msg="runSandbox: removing pod sandbox daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.083349970Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.083361490Z" level=info msg="runSandbox: unmounting shmPath for sandbox daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.094458156Z" level=info msg="runSandbox: removing pod sandbox from storage: 646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.094461430Z" level=info msg="runSandbox: removing pod sandbox from storage: 90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.097289823Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.097308063Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=65d08e99-0746-4542-bf77-391a2a7c303d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.097535 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.097576 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.097600 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.097645 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.098436558Z" level=info msg="runSandbox: removing pod sandbox from storage: daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.100667776Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.100689261Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=b2b815d3-a84a-4a71-a2d2-c78b526687b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.100905 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.100936 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.100958 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.100995 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.104700291Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:21.104735013Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=45913514-193c-4b43-aea9-de9368ee3d3c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.104944 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.104976 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.104997 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.105036 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:21.996672 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:58:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:21.997171 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:58:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0e4d98a1\x2d5f09\x2d4276\x2d8bca\x2d42c97446d74b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0e4d98a1\x2d5f09\x2d4276\x2d8bca\x2d42c97446d74b.mount has successfully entered the 'dead' state. 
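The reason the indicator file never appears is visible in the same window: kubelet will not restart the crashed ovnkube-node container until its restart back-off expires, and the back-off has already grown to its cap ("back-off 5m0s restarting failed container"). Kubelet doubles the restart delay after each failed start up to a fixed cap; a self-contained sketch of that schedule, using the commonly cited 10s base / 2x factor / 5m cap as assumptions rather than values read from this cluster:

    package main

    import (
        "fmt"
        "time"
    )

    // Prints the restart schedule implied by the CrashLoopBackOff entries
    // above: the delay doubles per failed start and saturates at the cap,
    // after which every sync logs "back-off 5m0s restarting failed container".
    func main() {
        delay := 10 * time.Second // assumed kubelet default base delay
        maxDelay := 5 * time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("failed start %d -> next attempt in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }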
Jan 23 16:58:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-90a74f54e8ead8da4d28f091d78b1978c00511e7082d87d4a0fb4eba12df25e3-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-daa72dd240900793ec3514ff43b6e0576b3c2fb56442f6c4e47fffc79912670c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-646c1c7deb807585e12e8d10d9f41f2e2b1db594d018ad5a99ae3493f076c96f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:58:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:25.996372 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:58:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:25.996789291Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:25.996839686Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:26.009316104Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8c5e8362-4f72-495c-b23d-5da3ec7d7fb1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:26.009338769Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:27.877649 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:58:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:27.877668 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:58:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:27.877675 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:58:27 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:27.877681 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:58:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:27.877689 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:58:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:27.877697 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:58:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:27.877703 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:58:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:28.143630547Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:58:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:28.995948 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:58:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:28.996287762Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:28.996497039Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:29.007761862Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/ffd6df03-7124-4295-8cf0-4f916aeeb7bf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:29.007782896Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:29.996118 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:29.996513287Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:29.996554667Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:30.007867933Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/febe5b42-e797-44af-bf4b-624503c260e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:30.007894137Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:30.995486 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 16:58:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:30.995606 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 16:58:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:30.995866192Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:30.995909668Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:30.995933170Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:30.995965535Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:31.011694054Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/5cb7f8bc-402f-4e83-8bf4-11cfd55f1345 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:31.011714855Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
16:58:31.012574564Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/6679479e-3f2b-4258-a22d-3defe8990c5e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:31.012594412Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:31.996307 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 16:58:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:31.996776464Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:31.996829421Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:32.008860878Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/54b39ffb-f8c4-4342-b894-7d9b05ba1a6a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:32.008883026Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:32.996464 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 16:58:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:32.997120471Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:32.997175855Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:33.009297090Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/1afa259a-631f-47fc-bde8-a41b113b6200 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:33.009321086Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:34.996374 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 16:58:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:34.996508 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 16:58:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:34.996967285Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:34.997021323Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:34.997030883Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:34.997067425Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:58:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:34.997197 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:58:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:34.997698 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:58:35 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 16:58:35.014507840Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/e257ec05-2bd2-4be0-9c31-5c30baf69f36 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:35.014532899Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:35.017315741Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/0e461c45-6083-45c4-869c-31cdc373e960 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:35.017333950Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.043442831Z" level=info msg="NetworkStart: stopping network for sandbox 87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.043598315Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/f6071517-9ebd-4e4f-9892-5531e4137166 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.043625048Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.043632150Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.043639628Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.043889083Z" level=info msg="NetworkStart: stopping network for sandbox d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.044013376Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/570f6851-65e3-494d-8e38-22e2e11b581b 
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.044035157Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.044042052Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.044048780Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.045973662Z" level=info msg="NetworkStart: stopping network for sandbox 6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046149127Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/60841a3b-4199-456d-abae-5d5dcc1e321a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046174919Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046183087Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046189983Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046607110Z" level=info msg="NetworkStart: stopping network for sandbox e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046751550Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/36bce5e8-6dfd-4b12-b5c2-e7a0be85a704 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046773674Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046779826Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.046785995Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:41 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.048236818Z" level=info msg="NetworkStart: stopping network for sandbox d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.048360306Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/643826c9-7133-4a98-9889-8e06a541446e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.048385836Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.048393227Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:58:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:41.048399684Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:58:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:45.996151 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:58:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:45.996674 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:58:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:58.143189182Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:58:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:58:58.996551 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:58:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:58:58.997054 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:58:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:59.024483406Z" level=info msg="NetworkStart: stopping network for sandbox 765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:58:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:59.024647366Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad 
UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/8d22475b-e405-4ed2-9dd2-508ee79d33c3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:58:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:59.024675775Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:58:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:59.024686324Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:58:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:58:59.024693974Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:03.022334421Z" level=info msg="NetworkStart: stopping network for sandbox f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:03.022480379Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/2c23d7fa-6480-47c4-8998-09cc240ae92b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:03.022501479Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:03.022507993Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:03.022517254Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:06.019122214Z" level=info msg="NetworkStart: stopping network for sandbox 04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:06.019279637Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/19a52f06-5d9a-401f-b6f5-ff417b7f93ab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:06.019303202Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:06.019310449Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:06.019318312Z" level=info msg="Deleting pod 
openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1217] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1222] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1224] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1422] dhcp4 (eno12409): canceled DHCP transaction Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1423] dhcp6 (eno12409): canceled DHCP transaction Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1435] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1438] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1438] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1440] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1443] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 16:59:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493148.1447] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:59:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493150.2836] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 16:59:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:10.996651 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:59:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:10.997347 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:59:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:11.023088062Z" level=info msg="NetworkStart: stopping network for sandbox 50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:11.023241325Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8c5e8362-4f72-495c-b23d-5da3ec7d7fb1 
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:11.023267389Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:11.023274473Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:11.023282298Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:14.021245547Z" level=info msg="NetworkStart: stopping network for sandbox 20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:14.021398585Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/ffd6df03-7124-4295-8cf0-4f916aeeb7bf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:14.021420672Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:14.021427170Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:14.021434610Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:15.021136766Z" level=info msg="NetworkStart: stopping network for sandbox 22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:15.021311128Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/febe5b42-e797-44af-bf4b-624503c260e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:15.021338742Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:15.021346308Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:15.021353430Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:16 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024185720Z" level=info msg="NetworkStart: stopping network for sandbox 098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024416086Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/6679479e-3f2b-4258-a22d-3defe8990c5e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024440380Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024447217Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024453767Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024803300Z" level=info msg="NetworkStart: stopping network for sandbox 6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024918920Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/5cb7f8bc-402f-4e83-8bf4-11cfd55f1345 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024946646Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024954029Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:16.024960381Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:17.021922802Z" level=info msg="NetworkStart: stopping network for sandbox 74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:17.022064489Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/54b39ffb-f8c4-4342-b894-7d9b05ba1a6a Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:17.022088726Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:17.022095376Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:17.022103447Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:18.021506725Z" level=info msg="NetworkStart: stopping network for sandbox 29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:18.021650185Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/1afa259a-631f-47fc-bde8-a41b113b6200 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:18.021672712Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:18.021680046Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:18.021686289Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.027382647Z" level=info msg="NetworkStart: stopping network for sandbox 1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.027526952Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/e257ec05-2bd2-4be0-9c31-5c30baf69f36 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.027549043Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.027555891Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.027562065Z" level=info msg="Deleting pod 
openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.029247174Z" level=info msg="NetworkStart: stopping network for sandbox db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.029359086Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/0e461c45-6083-45c4-869c-31cdc373e960 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.029377556Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.029383608Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 16:59:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:20.029389772Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:25.997018 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:59:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:25.997523 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.055266370Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.055309409Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.055571531Z" level=error msg="Error stopping network on cleanup: failed 
to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.055597276Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.056968045Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.057012479Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.057279155Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.057308850Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.059091760Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.059123595Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-570f6851\x2d65e3\x2d494d\x2d8e38\x2d22e2e11b581b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-570f6851\x2d65e3\x2d494d\x2d8e38\x2d22e2e11b581b.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f6071517\x2d9ebd\x2d4e4f\x2d9892\x2d5531e4137166.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f6071517\x2d9ebd\x2d4e4f\x2d9892\x2d5531e4137166.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-643826c9\x2d7133\x2d4a98\x2d9889\x2d8e06a541446e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-643826c9\x2d7133\x2d4a98\x2d9889\x2d8e06a541446e.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-36bce5e8\x2d6dfd\x2d4b12\x2db5c2\x2de7a0be85a704.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-36bce5e8\x2d6dfd\x2d4b12\x2db5c2\x2de7a0be85a704.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-60841a3b\x2d4199\x2d456d\x2dabae\x2d5d5dcc1e321a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-60841a3b\x2d4199\x2d456d\x2dabae\x2d5d5dcc1e321a.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-570f6851\x2d65e3\x2d494d\x2d8e38\x2d22e2e11b581b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-570f6851\x2d65e3\x2d494d\x2d8e38\x2d22e2e11b581b.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f6071517\x2d9ebd\x2d4e4f\x2d9892\x2d5531e4137166.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f6071517\x2d9ebd\x2d4e4f\x2d9892\x2d5531e4137166.mount has successfully entered the 'dead' state. 
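The five cleanup failures above (oauth-apiserver, controller-manager, openshift-apiserver, oauth-openshift, route-controller-manager) all fail the same way: Multus gates every CNI ADD and DEL on the default network's readiness indicator file, and the wait times out because the crash-looping ovnkube-node container never writes /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. Below is a minimal Go sketch of that gate, assuming only what the log states — that Multus polls for the file with k8s.io/apimachinery's wait.PollImmediate ("PollImmediate error waiting for ReadinessIndicatorFile"). The interval and timeout values are illustrative, not Multus defaults.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// readinessFile is the path named in the log lines above.
const readinessFile = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

// waitForReadinessIndicator polls until the default network's CNI config
// appears on disk, mirroring the "waiting for ReadinessIndicatorFile"
// behavior described in the errors above.
func waitForReadinessIndicator(interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		_, err := os.Stat(readinessFile)
		if err == nil {
			return true, nil // ovnkube-node wrote its config; CNI may proceed
		}
		if os.IsNotExist(err) {
			return false, nil // keep polling until the timeout expires
		}
		return false, err // any other stat error aborts the wait
	})
}

func main() {
	// Illustrative values; on this node the real timeout expired, producing
	// "timed out waiting for the condition" on every CNI ADD and DEL.
	if err := waitForReadinessIndicator(time.Second, 10*time.Second); err != nil {
		fmt.Println("readiness indicator missing:", err)
	}
}
```

On a healthy node the file already exists and the poll returns on its first check; here every sandbox setup and teardown burns the full timeout instead, which is why the same pods cycle through NetworkStart/runSandbox cleanup over and over in the entries that follow.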
Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-36bce5e8\x2d6dfd\x2d4b12\x2db5c2\x2de7a0be85a704.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-36bce5e8\x2d6dfd\x2d4b12\x2db5c2\x2de7a0be85a704.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-60841a3b\x2d4199\x2d456d\x2dabae\x2d5d5dcc1e321a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-60841a3b\x2d4199\x2d456d\x2dabae\x2d5d5dcc1e321a.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-643826c9\x2d7133\x2d4a98\x2d9889\x2d8e06a541446e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-643826c9\x2d7133\x2d4a98\x2d9889\x2d8e06a541446e.mount has successfully entered the 'dead' state. Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.092325240Z" level=info msg="runSandbox: deleting pod ID d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371 from idIndex" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.092351849Z" level=info msg="runSandbox: removing pod sandbox d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.092365899Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.092327923Z" level=info msg="runSandbox: deleting pod ID 87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5 from idIndex" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.092393895Z" level=info msg="runSandbox: removing pod sandbox 87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.092402830Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.092413841Z" level=info msg="runSandbox: unmounting shmPath for sandbox d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.092522871Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096291315Z" level=info msg="runSandbox: deleting pod ID d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4 from idIndex" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096320292Z" level=info msg="runSandbox: removing pod sandbox d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096337004Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096347414Z" level=info msg="runSandbox: unmounting shmPath for sandbox d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096380919Z" level=info msg="runSandbox: deleting pod ID 6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd from idIndex" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096409916Z" level=info msg="runSandbox: removing pod sandbox 6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096426813Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096441348Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096294725Z" level=info msg="runSandbox: deleting pod ID e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d from idIndex" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096551619Z" level=info msg="runSandbox: removing pod sandbox e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096567666Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.096580533Z" level=info msg="runSandbox: unmounting shmPath for sandbox 
e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.101463996Z" level=info msg="runSandbox: removing pod sandbox from storage: 87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.104465685Z" level=info msg="runSandbox: removing pod sandbox from storage: e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.104466656Z" level=info msg="runSandbox: removing pod sandbox from storage: d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.104511417Z" level=info msg="runSandbox: removing pod sandbox from storage: 6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.104857582Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.104880225Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=4033af6c-9175-442b-a69e-30518dcf20b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.105362 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.105405 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.105428 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.105474 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.108310324Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.108328511Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=62e7118a-a960-467f-9890-f860813491d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.108600 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.108639 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.108661 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.108702 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.112627434Z" level=info msg="runSandbox: removing pod sandbox from storage: d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.113957139Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.113991621Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=c3308e8c-4e32-4d65-a11f-5bb86111b2ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.114403 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.114434 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.114454 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.114491 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.119151089Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.119173808Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d5bbe51d-5b63-4b67-8c37-b2b373d7b13d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.119427 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.119457 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.119477 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.119513 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.122602752Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.122625501Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=e1459c53-fff3-4963-b1c0-8ef0bb3ebe41 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.122872 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.122904 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.122926 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:26.122975 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:26.159602 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:26.159671 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:26.159735 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:26.159814 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 16:59:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:26.159942 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160023692Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160061391Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160078710Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160036993Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160144318Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160143676Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160179194Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160067275Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160210347Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 
16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.160107481Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.186652743Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/dae82b78-bbb4-4dd3-af23-643ce18beb38 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.186676169Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.186798497Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/6d93cf86-e8cf-4637-a240-28ab9814d3ea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.186820846Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.188829912Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/6aa11cbb-df9a-4bb7-b3e9-fb1df3199c0a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.188852182Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.191133657Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a849a3f5-1047-4153-8f91-482b2048fdd6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.191155839Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.191808688Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/3e995d30-b9ce-411d-8dea-fbe369b491dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] 
Aliases:map[]}" Jan 23 16:59:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:26.191828420Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-643826c9\x2d7133\x2d4a98\x2d9889\x2d8e06a541446e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-643826c9\x2d7133\x2d4a98\x2d9889\x2d8e06a541446e.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-36bce5e8\x2d6dfd\x2d4b12\x2db5c2\x2de7a0be85a704.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-36bce5e8\x2d6dfd\x2d4b12\x2db5c2\x2de7a0be85a704.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-60841a3b\x2d4199\x2d456d\x2dabae\x2d5d5dcc1e321a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-60841a3b\x2d4199\x2d456d\x2dabae\x2d5d5dcc1e321a.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-570f6851\x2d65e3\x2d494d\x2d8e38\x2d22e2e11b581b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-570f6851\x2d65e3\x2d494d\x2d8e38\x2d22e2e11b581b.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f6071517\x2d9ebd\x2d4e4f\x2d9892\x2d5531e4137166.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f6071517\x2d9ebd\x2d4e4f\x2d9892\x2d5531e4137166.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d1d8b7200bb9c983fb1924f18084b4109a76bacd60c746809cf21f7686909cb4-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e6b074f664d8b7ffe99c3fa0d39823e0b90e9c61476a6a7b327ead62a8b2016d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6b919b60e08eb61b0392853b03e405af9d047c005775e7723b95cade185bebfd-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d62828132b76f84854dbcb82d2de8c2600435add2128d248904f661ec48ed371-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-87d7caf2ed9c71d76093ab150dbd90d7b46275e3eaa62097d28dd6f57357dbd5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:27.878145 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:59:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:27.878285 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:59:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:27.878291 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:59:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:27.878298 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:59:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:27.878303 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:59:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:27.878309 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:59:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:27.878315 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 16:59:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:28.142750042Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:59:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:36.996067 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:59:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:36.996610 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.035900195Z" level=error msg="Error stopping network on cleanup: failed to 
destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.036091755Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8d22475b\x2de405\x2d4ed2\x2d9dd2\x2d508ee79d33c3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8d22475b\x2de405\x2d4ed2\x2d9dd2\x2d508ee79d33c3.mount has successfully entered the 'dead' state. Jan 23 16:59:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8d22475b\x2de405\x2d4ed2\x2d9dd2\x2d508ee79d33c3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8d22475b\x2de405\x2d4ed2\x2d9dd2\x2d508ee79d33c3.mount has successfully entered the 'dead' state. Jan 23 16:59:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8d22475b\x2de405\x2d4ed2\x2d9dd2\x2d508ee79d33c3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8d22475b\x2de405\x2d4ed2\x2d9dd2\x2d508ee79d33c3.mount has successfully entered the 'dead' state. 
Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.080310141Z" level=info msg="runSandbox: deleting pod ID 765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad from idIndex" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.080340711Z" level=info msg="runSandbox: removing pod sandbox 765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.080356707Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.080370542Z" level=info msg="runSandbox: unmounting shmPath for sandbox 765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.096462551Z" level=info msg="runSandbox: removing pod sandbox from storage: 765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.099443649Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:44.099463090Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=8ec5ea0f-50ea-455d-a2a2-f159200f1fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:44.099717 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:59:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:44.099763 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:59:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:44.099791 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:59:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:44.099847 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(765c436e160dbb96a8e9ef49c6a412c8b27bdc3ab76d84ac770089cb739955ad): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 16:59:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:47.996885 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 16:59:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:47.997528 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.033628186Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.033675224Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2c23d7fa\x2d6480\x2d47c4\x2d8998\x2d09cc240ae92b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2c23d7fa\x2d6480\x2d47c4\x2d8998\x2d09cc240ae92b.mount has successfully entered the 'dead' state. Jan 23 16:59:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2c23d7fa\x2d6480\x2d47c4\x2d8998\x2d09cc240ae92b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2c23d7fa\x2d6480\x2d47c4\x2d8998\x2d09cc240ae92b.mount has successfully entered the 'dead' state. Jan 23 16:59:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2c23d7fa\x2d6480\x2d47c4\x2d8998\x2d09cc240ae92b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2c23d7fa\x2d6480\x2d47c4\x2d8998\x2d09cc240ae92b.mount has successfully entered the 'dead' state. 
Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.079310861Z" level=info msg="runSandbox: deleting pod ID f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e from idIndex" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.079337352Z" level=info msg="runSandbox: removing pod sandbox f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.079351543Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.079363319Z" level=info msg="runSandbox: unmounting shmPath for sandbox f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.099481737Z" level=info msg="runSandbox: removing pod sandbox from storage: f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.103088229Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:48.103105923Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d0db59f8-8c13-4526-a26d-ef43366d31a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:48.103396 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:59:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:48.103438 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:59:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:48.103461 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 16:59:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:48.103502 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f08e40c1863d56bdd0d0219253320afbaf0f00e3c7fff784071427381bb2385e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.030308094Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.030354084Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-19a52f06\x2d5d9a\x2d401f\x2db6f5\x2dff417b7f93ab.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-19a52f06\x2d5d9a\x2d401f\x2db6f5\x2dff417b7f93ab.mount has successfully entered the 'dead' state. Jan 23 16:59:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-19a52f06\x2d5d9a\x2d401f\x2db6f5\x2dff417b7f93ab.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-19a52f06\x2d5d9a\x2d401f\x2db6f5\x2dff417b7f93ab.mount has successfully entered the 'dead' state. Jan 23 16:59:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-19a52f06\x2d5d9a\x2d401f\x2db6f5\x2dff417b7f93ab.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-19a52f06\x2d5d9a\x2d401f\x2db6f5\x2dff417b7f93ab.mount has successfully entered the 'dead' state. 
Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.071288553Z" level=info msg="runSandbox: deleting pod ID 04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835 from idIndex" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.071313181Z" level=info msg="runSandbox: removing pod sandbox 04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.071328898Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.071342822Z" level=info msg="runSandbox: unmounting shmPath for sandbox 04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.087474427Z" level=info msg="runSandbox: removing pod sandbox from storage: 04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.091084311Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:51.091104292Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ed47b200-e8ff-4c74-82ae-463e9785e631 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:51.091284 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 16:59:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:51.091331 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:59:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:51.091354 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 16:59:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:51.091406 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(04c0085305c5e7571f9131ac754fa18882d67cadf97512f4a8850ce92b286835): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 16:59:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:55.996110 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 16:59:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:55.996513288Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:55.996562203Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.009062606Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/6d8c2a0e-a191-4bd1-bb04-16dcafa8e8ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.009083022Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.034216065Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.034252920Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8c5e8362\x2d4f72\x2d495c\x2db23d\x2d5da3ec7d7fb1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8c5e8362\x2d4f72\x2d495c\x2db23d\x2d5da3ec7d7fb1.mount has successfully entered the 'dead' state. Jan 23 16:59:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8c5e8362\x2d4f72\x2d495c\x2db23d\x2d5da3ec7d7fb1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8c5e8362\x2d4f72\x2d495c\x2db23d\x2d5da3ec7d7fb1.mount has successfully entered the 'dead' state. Jan 23 16:59:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8c5e8362\x2d4f72\x2d495c\x2db23d\x2d5da3ec7d7fb1.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8c5e8362\x2d4f72\x2d495c\x2db23d\x2d5da3ec7d7fb1.mount has successfully entered the 'dead' state. Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.078307032Z" level=info msg="runSandbox: deleting pod ID 50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742 from idIndex" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.078329070Z" level=info msg="runSandbox: removing pod sandbox 50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.078342631Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.078354158Z" level=info msg="runSandbox: unmounting shmPath for sandbox 50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.094403756Z" level=info msg="runSandbox: removing pod sandbox from storage: 50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.097622429Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:56.097639609Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=3a62c762-f689-475a-9179-7812b41d0591 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:56.097775 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:59:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:56.097814 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:59:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:56.097835 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 16:59:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:56.097881 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 16:59:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-50e96f70bacb4e4a0ffc80db7161ad2a95f4e136de1f6bd9c7cdcf2b2a156742-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:58.143297923Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.031365006Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.031400719Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ffd6df03\x2d7124\x2d4295\x2d8cf0\x2d4f916aeeb7bf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ffd6df03\x2d7124\x2d4295\x2d8cf0\x2d4f916aeeb7bf.mount has successfully entered the 'dead' state. Jan 23 16:59:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ffd6df03\x2d7124\x2d4295\x2d8cf0\x2d4f916aeeb7bf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ffd6df03\x2d7124\x2d4295\x2d8cf0\x2d4f916aeeb7bf.mount has successfully entered the 'dead' state. Jan 23 16:59:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ffd6df03\x2d7124\x2d4295\x2d8cf0\x2d4f916aeeb7bf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ffd6df03\x2d7124\x2d4295\x2d8cf0\x2d4f916aeeb7bf.mount has successfully entered the 'dead' state. 
Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.080312250Z" level=info msg="runSandbox: deleting pod ID 20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7 from idIndex" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.080336316Z" level=info msg="runSandbox: removing pod sandbox 20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.080349393Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.080361660Z" level=info msg="runSandbox: unmounting shmPath for sandbox 20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.096442297Z" level=info msg="runSandbox: removing pod sandbox from storage: 20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.099824920Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.099843800Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=c926e1aa-f0c3-4abb-9ec9-1c2d94eaab81 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:59.100074 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 16:59:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:59.100116 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:59:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:59.100138 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 16:59:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 16:59:59.100183 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(20ae5b13b201dc2161d5ac22ff7b73b61f7fbfdc0fcdc034d31d5bdac462afc7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 16:59:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 16:59:59.995592 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.995937993Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 16:59:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 16:59:59.995974051Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.010847891Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/c50bfd9b-b3d4-479c-86a1-e67e541293ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.010880596Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.031595108Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.031633238Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-febe5b42\x2de797\x2d44af\x2dbf4b\x2d624503c260e8.mount: Succeeded. Jan 23 17:00:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-febe5b42\x2de797\x2d44af\x2dbf4b\x2d624503c260e8.mount: Succeeded. Jan 23 17:00:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-febe5b42\x2de797\x2d44af\x2dbf4b\x2d624503c260e8.mount: Succeeded.
Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.081309039Z" level=info msg="runSandbox: deleting pod ID 22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45 from idIndex" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.081333787Z" level=info msg="runSandbox: removing pod sandbox 22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.081347619Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.081360167Z" level=info msg="runSandbox: unmounting shmPath for sandbox 22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45-userdata-shm.mount: Succeeded. Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.101415440Z" level=info msg="runSandbox: removing pod sandbox from storage: 22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.104245843Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:00.104264038Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=69a4dc4c-2ee2-44de-84ab-f4a1bce4f8ed name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:00.104416 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 17:00:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:00.104463 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:00:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:00.104488 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:00:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:00.104539 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(22db7ebb96d5bbf83d639aca8200dc543c4af5b03e469192b152d928b1afca45): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.034959995Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.034995828Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.036542915Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.036570905Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6679479e\x2d3f2b\x2d4258\x2da22d\x2d3defe8990c5e.mount: Succeeded. Jan 23 17:00:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5cb7f8bc\x2d402f\x2d4e83\x2d8bf4\x2d11cfd55f1345.mount: Succeeded. Jan 23 17:00:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6679479e\x2d3f2b\x2d4258\x2da22d\x2d3defe8990c5e.mount: Succeeded.
Jan 23 17:00:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5cb7f8bc\x2d402f\x2d4e83\x2d8bf4\x2d11cfd55f1345.mount: Succeeded. Jan 23 17:00:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6679479e\x2d3f2b\x2d4258\x2da22d\x2d3defe8990c5e.mount: Succeeded. Jan 23 17:00:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5cb7f8bc\x2d402f\x2d4e83\x2d8bf4\x2d11cfd55f1345.mount: Succeeded. Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.092344159Z" level=info msg="runSandbox: deleting pod ID 6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c from idIndex" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.092366967Z" level=info msg="runSandbox: removing pod sandbox 6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.092380265Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.092392197Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.092469734Z" level=info msg="runSandbox: deleting pod ID 098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb from idIndex" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.092494756Z" level=info msg="runSandbox: removing pod sandbox 098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.092507846Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.092521335Z"
level=info msg="runSandbox: unmounting shmPath for sandbox 098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.105425411Z" level=info msg="runSandbox: removing pod sandbox from storage: 098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.109025427Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.109045214Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=11c26f8f-504c-432a-9f54-ac1278e4f37e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:01.109258 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:01.109304 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:00:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:01.109325 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:00:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:01.109371 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.110407937Z" level=info msg="runSandbox: removing pod sandbox from storage: 6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.113666367Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:01.113685393Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=3c2ed0c1-0150-4db6-9169-cb70cce32890 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:01.113851 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:01.113885 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:00:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:01.113905 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:00:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:01.113945 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.032968663Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.033005069Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-098617210cce46b1fa4cd89545c9119d4a93524f5b88b7d26d5d94ccb714cfeb-userdata-shm.mount: Succeeded. Jan 23 17:00:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6884358c613388017e40586d839faf245522e9378eae35b06aad29438e04ce6c-userdata-shm.mount: Succeeded. Jan 23 17:00:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-54b39ffb\x2df8c4\x2d4342\x2db894\x2d7d9b05ba1a6a.mount: Succeeded. Jan 23 17:00:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-54b39ffb\x2df8c4\x2d4342\x2db894\x2d7d9b05ba1a6a.mount: Succeeded. Jan 23 17:00:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-54b39ffb\x2df8c4\x2d4342\x2db894\x2d7d9b05ba1a6a.mount: Succeeded.
Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.086274605Z" level=info msg="runSandbox: deleting pod ID 74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711 from idIndex" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.086297957Z" level=info msg="runSandbox: removing pod sandbox 74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.086310438Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.086322008Z" level=info msg="runSandbox: unmounting shmPath for sandbox 74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711-userdata-shm.mount: Succeeded. Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.102433615Z" level=info msg="runSandbox: removing pod sandbox from storage: 74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.105900107Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.105920589Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=b86842d6-4322-40c2-b942-f28e71acce40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:02.106111 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready?
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:02.106156 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:00:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:02.106178 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:00:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:02.106238 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(74b3b3c31c9335818bc9ab388a60040f523b655502bf88362040608d9b322711): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:00:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:02.996107 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.996464671Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:02.996502705Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:02.996586 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:00:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:02.997096 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.007227895Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/4be78172-3e61-4370-aab3-0103a59115e1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.007247329Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.031608771Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.031639527Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea" 
id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1afa259a\x2d631f\x2d47fc\x2dbde8\x2da41b113b6200.mount: Succeeded. Jan 23 17:00:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1afa259a\x2d631f\x2d47fc\x2dbde8\x2da41b113b6200.mount: Succeeded. Jan 23 17:00:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1afa259a\x2d631f\x2d47fc\x2dbde8\x2da41b113b6200.mount: Succeeded. Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.072308593Z" level=info msg="runSandbox: deleting pod ID 29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea from idIndex" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.072333415Z" level=info msg="runSandbox: removing pod sandbox 29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.072346054Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.072358279Z" level=info msg="runSandbox: unmounting shmPath for sandbox 29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea-userdata-shm.mount: Succeeded.
Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.093458107Z" level=info msg="runSandbox: removing pod sandbox from storage: 29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.096236035Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:03.096254044Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=b81f9165-48af-4129-8425-9743caaaa8ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:03.096483 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:03.096522 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:00:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:03.096545 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:00:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:03.096593 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(29c49be29ab56bb0e96980b394ec4e215029feb30dbbea390837f20e48363dea): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.039582746Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.039617598Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.039948975Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.039985151Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0e461c45\x2d6083\x2d45c4\x2d869c\x2d31cdc373e960.mount: Succeeded. Jan 23 17:00:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e257ec05\x2d2bd2\x2d4be0\x2d9c31\x2d5c30baf69f36.mount: Succeeded. Jan 23 17:00:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0e461c45\x2d6083\x2d45c4\x2d869c\x2d31cdc373e960.mount: Succeeded.
Jan 23 17:00:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e257ec05\x2d2bd2\x2d4be0\x2d9c31\x2d5c30baf69f36.mount: Succeeded. Jan 23 17:00:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0e461c45\x2d6083\x2d45c4\x2d869c\x2d31cdc373e960.mount: Succeeded. Jan 23 17:00:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e257ec05\x2d2bd2\x2d4be0\x2d9c31\x2d5c30baf69f36.mount: Succeeded. Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.100323500Z" level=info msg="runSandbox: deleting pod ID 1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f from idIndex" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.100352072Z" level=info msg="runSandbox: removing pod sandbox 1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.100366279Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.100377786Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.100327623Z" level=info msg="runSandbox: deleting pod ID db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c from idIndex" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.100436753Z" level=info msg="runSandbox: removing pod sandbox db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.100449708Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.100464323Z"
level=info msg="runSandbox: unmounting shmPath for sandbox db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.121457189Z" level=info msg="runSandbox: removing pod sandbox from storage: 1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.121474896Z" level=info msg="runSandbox: removing pod sandbox from storage: db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.125092399Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.125118225Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=831c4ee1-518b-42b6-9b02-8ed2b2db9456 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:05.125587 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:05.125632 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:00:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:05.125655 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:00:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:05.125703 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.128261727Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:05.128282390Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=ba55525b-b29b-44ca-8c96-174958cb38f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:05.128516 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:05.128559 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:00:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:05.128582 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:00:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:05.128635 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:00:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-db42885a49bece9fc60c664592b7f399f11c8244d607661f730f64a0719a1f4c-userdata-shm.mount: Succeeded. Jan 23 17:00:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1fdaae49dbffb2e253497a0447f5e265bc1a1c80f64bb1cede57462a0c1d2d1f-userdata-shm.mount: Succeeded.
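A few entries below, CRI-O logs "error loading cached network config ... falling back to loading from existing plugins on disk" while tearing these sandboxes down: because the ADD never completed, no per-sandbox result was ever cached, so teardown has to re-read the network definitions from disk. A rough Go sketch of that fallback order; the cache location, file naming, and helper function are hypothetical illustrations, not CRI-O's actual code:

package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

const (
	cniCacheDir = "/var/lib/cni/results" // hypothetical cache location
	cniConfDir  = "/etc/cni/net.d"       // conventional conf dir; also hypothetical here
)

// loadNetworkConfig mimics the fallback seen in the log: prefer the cached
// per-sandbox result written at ADD time, and if it is missing, fall back to
// whatever plugin configs exist on disk.
func loadNetworkConfig(sandboxID string) ([]byte, error) {
	cached := filepath.Join(cniCacheDir, sandboxID+".json")
	if data, err := os.ReadFile(cached); err == nil {
		return data, nil
	}
	// "falling back to loading from existing plugins on disk"
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist":
			return os.ReadFile(filepath.Join(cniConfDir, e.Name()))
		}
	}
	return nil, errors.New("no CNI config in cache or on disk")
}

func main() {
	cfg, err := loadNetworkConfig("f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0")
	fmt.Println(len(cfg), err)
}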
Jan 23 17:00:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:09.996432 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:00:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:09.997000311Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:09.997051975Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:10.008550483Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/0fecc762-b297-40ae-b159-56c26e234f43 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:10.008574071Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:10.995871 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:00:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:10.996201345Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:10.996248331Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.006642785Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/9448fa4e-c9e9-45d9-8592-f8ad5e512966 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.006663385Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.199712525Z" level=info msg="NetworkStart: stopping network for sandbox f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.199866926Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/dae82b78-bbb4-4dd3-af23-643ce18beb38 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.199891164Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.199898526Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.199905712Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201250037Z" level=info msg="NetworkStart: stopping network for sandbox eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201391342Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/6aa11cbb-df9a-4bb7-b3e9-fb1df3199c0a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201411817Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201419216Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201425582Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201558730Z" level=info msg="NetworkStart: stopping network for sandbox 12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201671262Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/6d93cf86-e8cf-4637-a240-28ab9814d3ea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201694368Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201702091Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.201709982Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:00:11.203676659Z" level=info msg="NetworkStart: stopping network for sandbox e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.203786034Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a849a3f5-1047-4153-8f91-482b2048fdd6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.203806617Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.203813048Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.203820343Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.204374667Z" level=info msg="NetworkStart: stopping network for sandbox 28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.204486438Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/3e995d30-b9ce-411d-8dea-fbe369b491dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.204511374Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.204518671Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.204526372Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:11.995895 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:00:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:11.996131 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.996191873Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.996246649Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.996634810Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:11.996681907Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:12.017871012Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b4cbaa37-be57-479d-b7ee-a46de5205b88 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:12.017897282Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:12.018475847Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/a940b28d-79f3-42ee-890e-fc02dfee97c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:12.018500811Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:14.995622 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:00:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:14.995767 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:00:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:14.996035741Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:14.996252950Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:14.996063358Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:14.996512706Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:15.011256380Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/88c266d8-5595-44b7-9f9e-58a2db430f54 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:15.011278565Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:15.012159287Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/ef6fd0fe-c2f1-446f-bcd2-1b093ed40782 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:15.012182453Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:15.996269 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:00:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:15.996565750Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:15.996614522Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:15.997126 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:00:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:15.997650 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:00:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:16.008940487Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/b59047f5-c12f-4afd-bc11-ad5cdcb7eebd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:16.008961025Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:16.995688 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:00:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:16.995737 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:00:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:16.996062898Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:16.996111513Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:16.996132710Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:16.996173791Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:17.010772535Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/31e5a0f0-56f0-4044-9e11-337f17250661 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:17.010794486Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:17.012370899Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/66b51b1e-1a13-4798-b648-b4d853770b23 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:17.012393967Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:27.879378 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:00:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:27.879399 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:00:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:27.879406 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:00:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:27.879413 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:00:27 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 17:00:27.879421 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:00:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:27.879428 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:00:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:27.879435 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:00:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:27.887850669Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=d7a74d5c-563e-4fec-9236-5a19f82faf39 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:00:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:27.887974717Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d7a74d5c-563e-4fec-9236-5a19f82faf39 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:00:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:28.143351701Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:00:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:29.997151 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:00:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:29.997823 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:00:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493238.1239] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:00:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493238.1244] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:00:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493238.1246] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:00:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493238.1261] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:00:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493238.1262] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:00:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:41.022264108Z" level=info msg="NetworkStart: stopping network for sandbox da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:41 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:00:41.022675704Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/6d8c2a0e-a191-4bd1-bb04-16dcafa8e8ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:41.022700431Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:41.022707247Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:41.022716524Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:44.996544 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:00:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:44.997047 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:00:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:45.024215558Z" level=info msg="NetworkStart: stopping network for sandbox 87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:45.024382957Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/c50bfd9b-b3d4-479c-86a1-e67e541293ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:45.024412605Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:45.024420016Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:45.024426652Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:48.019197097Z" level=info msg="NetworkStart: stopping network for sandbox 6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:48.019370489Z" level=info msg="Got 
pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/4be78172-3e61-4370-aab3-0103a59115e1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:48.019392428Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:48.019400277Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:48.019407048Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:55.020983101Z" level=info msg="NetworkStart: stopping network for sandbox ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:55.021149575Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/0fecc762-b297-40ae-b159-56c26e234f43 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:55.021175966Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:55.021183099Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:55.021189795Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.020503649Z" level=info msg="NetworkStart: stopping network for sandbox 2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.020646270Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/9448fa4e-c9e9-45d9-8592-f8ad5e512966 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.020667572Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.020673824Z" 
level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.020680796Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.210898594Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.210935701Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.212329665Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.212356003Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.212894606Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.212926662Z" 
level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.213562604Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.213595274Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dae82b78\x2dbbb4\x2d4dd3\x2daf23\x2d643ce18beb38.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-dae82b78\x2dbbb4\x2d4dd3\x2daf23\x2d643ce18beb38.mount has successfully entered the 'dead' state. Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.215880543Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.215906068Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3e995d30\x2db9ce\x2d411d\x2d8dea\x2dfbe369b491dc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3e995d30\x2db9ce\x2d411d\x2d8dea\x2dfbe369b491dc.mount has successfully entered the 'dead' state. Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a849a3f5\x2d1047\x2d4153\x2d8f91\x2d482b2048fdd6.mount: Succeeded. 
Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6aa11cbb\x2ddf9a\x2d4bb7\x2db3e9\x2dfb1df3199c0a.mount: Succeeded. Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6d93cf86\x2de8cf\x2d4637\x2da240\x2d28ab9814d3ea.mount: Succeeded. Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dae82b78\x2dbbb4\x2d4dd3\x2daf23\x2d643ce18beb38.mount: Succeeded. Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3e995d30\x2db9ce\x2d411d\x2d8dea\x2dfbe369b491dc.mount: Succeeded. Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a849a3f5\x2d1047\x2d4153\x2d8f91\x2d482b2048fdd6.mount: Succeeded. Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6aa11cbb\x2ddf9a\x2d4bb7\x2db3e9\x2dfb1df3199c0a.mount: Succeeded. Jan 23 17:00:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6d93cf86\x2de8cf\x2d4637\x2da240\x2d28ab9814d3ea.mount: Succeeded.
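Meanwhile the recurring "back-off 5m0s restarting failed container=ovnkube-node" entries show kubelet's crash-loop back-off already at its ceiling. As a rough model (not kubelet's actual code; the 10s base is an assumption from kubelet's documented defaults), the delay doubles per failed restart and caps at five minutes:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay sketches kubelet's CrashLoopBackOff behavior: start at a
// base delay and double per failed restart, capped at 5m. Illustrative
// reconstruction only.
func crashLoopDelay(restarts int) time.Duration {
	const (
		base = 10 * time.Second
		max  = 5 * time.Minute
	)
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r))
	}
	// After about five crashes the delay pins at 5m0s, the steady state
	// reported for ovnkube-node-897lw throughout this log.
}

Until one of those five-minutely restarts of ovnkube-node succeeds and the readiness indicator file appears, every sandbox in this log keeps cycling through the same create, time out, tear down loop.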
Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246373996Z" level=info msg="runSandbox: deleting pod ID eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149 from idIndex" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246404330Z" level=info msg="runSandbox: removing pod sandbox eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246423543Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246450642Z" level=info msg="runSandbox: unmounting shmPath for sandbox eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246377870Z" level=info msg="runSandbox: deleting pod ID f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0 from idIndex" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246495150Z" level=info msg="runSandbox: removing pod sandbox f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246508083Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246518991Z" level=info msg="runSandbox: unmounting shmPath for sandbox f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246529045Z" level=info msg="runSandbox: deleting pod ID 12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4 from idIndex" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246554121Z" level=info msg="runSandbox: removing pod sandbox 12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246568146Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.246580493Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.250301266Z" level=info msg="runSandbox: deleting pod ID 28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2 from idIndex" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.250325972Z" level=info msg="runSandbox: removing pod sandbox 28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.250340216Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.250352917Z" level=info msg="runSandbox: unmounting shmPath for sandbox 28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.258454783Z" level=info msg="runSandbox: removing pod sandbox from storage: 12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.262064340Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.262094298Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=296cb38e-47ce-4d89-b41a-09010134dc86 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.262452222Z" level=info msg="runSandbox: deleting pod ID e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6 from idIndex" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.262476560Z" level=info msg="runSandbox: removing pod sandbox e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.262488918Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.262495161Z" level=info msg="runSandbox: removing pod sandbox from storage: f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0" 
id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.262500046Z" level=info msg="runSandbox: unmounting shmPath for sandbox e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.262395 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.262542 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.262566 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.262612 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.262510043Z" level=info msg="runSandbox: removing pod sandbox from storage: eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.266049340Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.266066823Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=fd453277-2bc0-472b-9a9e-ed9ad14229e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.266272 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.266307 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.266327 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.266364 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.269600776Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.269620755Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=27b4ba67-a447-4ef6-a7f7-9e9666135aba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.269823 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.269872 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.269897 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.269943 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.270443113Z" level=info msg="runSandbox: removing pod sandbox from storage: 28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.273741432Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.273760965Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=ee26cad4-bb8f-4baa-b543-986d6a0dc189 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.273958 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.273994 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.274015 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.274055 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.279441597Z" level=info msg="runSandbox: removing pod sandbox from storage: e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.282617232Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.282634747Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b01d426a-b588-400c-b69a-9156b562bc6e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.282799 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.282831 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.282852 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:56.282890 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:56.327926 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:56.327969 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:56.328150 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:56.328240 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.328235749Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.328264316Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.328364384Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.328389770Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:56.328406 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.328460892Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.328475683Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.328627619Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.328645958Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.329093886Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.329181104Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.359495829Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/92d3b76a-2710-404d-b0a4-0e60fb005fe3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.359517620Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.360266163Z" level=info msg="Got 
pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/dc8aaca9-0ab8-48cb-8ac4-bad8b11147ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.360290491Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.361112600Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d316a3b5-f2c0-48bf-b2b8-20ea1b24eeba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.361134121Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.361956016Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5ae23710-3db5-45e7-8d56-07482a54d4b8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.361974816Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.363161777Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f9152216-2f1d-4207-9edb-c83f88001579 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:00:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:56.363183321Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031070989Z" level=info msg="NetworkStart: stopping network for sandbox fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031223743Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b4cbaa37-be57-479d-b7ee-a46de5205b88 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031245872Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031252422Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031259152Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031672538Z" level=info msg="NetworkStart: stopping network for sandbox 67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031776112Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/a940b28d-79f3-42ee-890e-fc02dfee97c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031794983Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031801387Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:00:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:57.031807065Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3e995d30\x2db9ce\x2d411d\x2d8dea\x2dfbe369b491dc.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a849a3f5\x2d1047\x2d4153\x2d8f91\x2d482b2048fdd6.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6aa11cbb\x2ddf9a\x2d4bb7\x2db3e9\x2dfb1df3199c0a.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6d93cf86\x2de8cf\x2d4637\x2da240\x2d28ab9814d3ea.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dae82b78\x2dbbb4\x2d4dd3\x2daf23\x2d643ce18beb38.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-28d744358fee2c1477bbf33714f65c45c12aba7e08c6016595da8f3a81a3c9e2-userdata-shm.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-12100a2eab2e8fdc496c26783f2bf7211a57cfbb87ab63c156d2f6797d7687a4-userdata-shm.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e140a126cefcf8d9c3acfd7bcbfb78d6eac948abbcd7ec415ed9c944dba549f6-userdata-shm.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-eeba8690940c0454efefba146862d2c88bd6b1441a609818bc76bc2568988149-userdata-shm.mount: Succeeded.
Jan 23 17:00:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f9a93afd03070e4d992a542a356e323822a894911fd52c083577f4bf4fcffdf0-userdata-shm.mount: Succeeded.
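Every RunPodSandbox failure above ends the same way: Multus refuses to handle the CNI ADD until its default network's readiness indicator file exists, and /var/run/multus/cni/net.d/10-ovn-kubernetes.conf (written by ovnkube-node once OVN is functional) never appears, so the poll times out. A minimal Go sketch of that gate, using the wait.PollImmediate helper the error text names; the interval and timeout values here are illustrative assumptions, not Multus's actual defaults:

// Sketch of the readiness gate behind "still waiting for
// readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf".
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

const readinessFile = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

func waitForReadinessIndicator() error {
	// Poll until the default network's config file shows up on disk.
	return wait.PollImmediate(250*time.Millisecond, 45*time.Second, func() (bool, error) {
		if _, err := os.Stat(readinessFile); err != nil {
			return false, nil // not there yet; keep polling
		}
		return true, nil
	})
}

func main() {
	if err := waitForReadinessIndicator(); err != nil {
		// With ovnkube-node down, this is the path taken: PollImmediate
		// returns "timed out waiting for the condition", which CRI-O and
		// kubelet then surface for each pod, as in the entries above.
		fmt.Printf("pollimmediate error: %v\n", err)
	}
}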
Jan 23 17:00:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:00:58.142264193Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:00:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:00:59.996984 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:00:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:00:59.997511 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.023727651Z" level=info msg="NetworkStart: stopping network for sandbox 168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.024012022Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/ef6fd0fe-c2f1-446f-bcd2-1b093ed40782 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.024039357Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.024046537Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.024053815Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.024874852Z" level=info msg="NetworkStart: stopping network for sandbox 43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.024990241Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/88c266d8-5595-44b7-9f9e-58a2db430f54 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.025012070Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.025020358Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:00.025027528Z" level=info msg="Deleting pod 
openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:01.022226354Z" level=info msg="NetworkStart: stopping network for sandbox 676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:01.022376352Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/b59047f5-c12f-4afd-bc11-ad5cdcb7eebd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:01.022398737Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:01.022405303Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:01.022411385Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.023015234Z" level=info msg="NetworkStart: stopping network for sandbox e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.023191412Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/31e5a0f0-56f0-4044-9e11-337f17250661 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.023220424Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.023226971Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.023233724Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.026115667Z" level=info msg="NetworkStart: stopping network for sandbox e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.026237056Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver 
ID:e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/66b51b1e-1a13-4798-b648-b4d853770b23 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.026263357Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.026270714Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:02.026277111Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:14.996844 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:01:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:14.997598 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:01:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:25.996594 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:01:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:25.997101 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.033470784Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.033527259Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:26 
hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6d8c2a0e\x2da191\x2d4bd1\x2dbb04\x2d16dcafa8e8ba.mount: Succeeded.
Jan 23 17:01:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6d8c2a0e\x2da191\x2d4bd1\x2dbb04\x2d16dcafa8e8ba.mount: Succeeded.
Jan 23 17:01:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6d8c2a0e\x2da191\x2d4bd1\x2dbb04\x2d16dcafa8e8ba.mount: Succeeded.
Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.073315685Z" level=info msg="runSandbox: deleting pod ID da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62 from idIndex" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.073345592Z" level=info msg="runSandbox: removing pod sandbox da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.073363133Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.073375970Z" level=info msg="runSandbox: unmounting shmPath for sandbox da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:01:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62-userdata-shm.mount: Succeeded.
Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.086419920Z" level=info msg="runSandbox: removing pod sandbox from storage: da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.089399227Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:26.089420342Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=09208aff-1540-48cb-a051-a6658d4ada33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:26.089653 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:26.089696 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:26.089728 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:26.089769 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(da2759cd9901b40a254cb34281c97e62654a1d7c0130e95520a73bfce3c06d62): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:01:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:27.879568 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:01:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:27.879585 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:01:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:27.879591 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:01:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:27.879597 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:01:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:27.879603 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:01:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:27.879609 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:01:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:27.879614 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:01:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:28.141397548Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.034879066Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.034922117Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c50bfd9b\x2db3d4\x2d479c\x2d86a1\x2de67e541293ba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c50bfd9b\x2db3d4\x2d479c\x2d86a1\x2de67e541293ba.mount has successfully entered the 'dead' state. Jan 23 17:01:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c50bfd9b\x2db3d4\x2d479c\x2d86a1\x2de67e541293ba.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c50bfd9b\x2db3d4\x2d479c\x2d86a1\x2de67e541293ba.mount has successfully entered the 'dead' state. Jan 23 17:01:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c50bfd9b\x2db3d4\x2d479c\x2d86a1\x2de67e541293ba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c50bfd9b\x2db3d4\x2d479c\x2d86a1\x2de67e541293ba.mount has successfully entered the 'dead' state. Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.087296864Z" level=info msg="runSandbox: deleting pod ID 87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3 from idIndex" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.087341014Z" level=info msg="runSandbox: removing pod sandbox 87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.087361016Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.087378196Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.103486140Z" level=info msg="runSandbox: removing pod sandbox from storage: 87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.106935904Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:30.106955086Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=af85dea4-8ba8-4895-aa89-11b306b2dc92 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:30.107199 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:30.107255 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:01:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:30.107277 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:01:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:30.107323 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(87b02c477c2ef5263a9cdd5b614b2fb265968cd8cb125d812704e9ed92b6bda3): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.030366067Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.030415319Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4be78172\x2d3e61\x2d4370\x2daab3\x2d0103a59115e1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4be78172\x2d3e61\x2d4370\x2daab3\x2d0103a59115e1.mount has successfully entered the 'dead' state. Jan 23 17:01:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4be78172\x2d3e61\x2d4370\x2daab3\x2d0103a59115e1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4be78172\x2d3e61\x2d4370\x2daab3\x2d0103a59115e1.mount has successfully entered the 'dead' state. Jan 23 17:01:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4be78172\x2d3e61\x2d4370\x2daab3\x2d0103a59115e1.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4be78172\x2d3e61\x2d4370\x2daab3\x2d0103a59115e1.mount has successfully entered the 'dead' state. Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.075308097Z" level=info msg="runSandbox: deleting pod ID 6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81 from idIndex" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.075332365Z" level=info msg="runSandbox: removing pod sandbox 6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.075349314Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.075360981Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.089458807Z" level=info msg="runSandbox: removing pod sandbox from storage: 6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.093108390Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:33.093345907Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=fc6b00a5-0084-45d8-adc9-4b6c92e25c03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:33.093618 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:33.093673 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:33.093698 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:33.093750 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6737d089c5bff42afd5b4ab15df4e59f982157cc4c1fb58486264446d353ad81): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:01:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:38.995420 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:38.995764554Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:38.995808799Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:39.008160709Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/d81fe4cb-bb91-4638-8706-12efb3ef7280 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:39.008185160Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.032063544Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.032105252Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0fecc762\x2db297\x2d40ae\x2db159\x2d56c26e234f43.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0fecc762\x2db297\x2d40ae\x2db159\x2d56c26e234f43.mount has successfully entered the 'dead' state. Jan 23 17:01:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0fecc762\x2db297\x2d40ae\x2db159\x2d56c26e234f43.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0fecc762\x2db297\x2d40ae\x2db159\x2d56c26e234f43.mount has successfully entered the 'dead' state. Jan 23 17:01:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0fecc762\x2db297\x2d40ae\x2db159\x2d56c26e234f43.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0fecc762\x2db297\x2d40ae\x2db159\x2d56c26e234f43.mount has successfully entered the 'dead' state. Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.068312298Z" level=info msg="runSandbox: deleting pod ID ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02 from idIndex" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.068342713Z" level=info msg="runSandbox: removing pod sandbox ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.068359988Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.068374779Z" level=info msg="runSandbox: unmounting shmPath for sandbox ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.080470257Z" level=info msg="runSandbox: removing pod sandbox from storage: ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.086869136Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:40.086894550Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=ee418104-c11e-43ea-b458-85ecfd120d9b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:40.087147 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:40.087202 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:01:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:40.087230 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:01:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:40.087279 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ad6cd7942c9975ecbb40e7ae66e3b869f20ee87e630ddcdf3740cb008f581a02): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:01:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:40.997002 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:01:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:40.997509 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.031477777Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.031517289Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9448fa4e\x2dc9e9\x2d45d9\x2d8592\x2df8ad5e512966.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9448fa4e\x2dc9e9\x2d45d9\x2d8592\x2df8ad5e512966.mount has successfully entered the 'dead' state. 
Jan 23 17:01:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9448fa4e\x2dc9e9\x2d45d9\x2d8592\x2df8ad5e512966.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9448fa4e\x2dc9e9\x2d45d9\x2d8592\x2df8ad5e512966.mount has successfully entered the 'dead' state. Jan 23 17:01:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9448fa4e\x2dc9e9\x2d45d9\x2d8592\x2df8ad5e512966.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9448fa4e\x2dc9e9\x2d45d9\x2d8592\x2df8ad5e512966.mount has successfully entered the 'dead' state. Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.070310556Z" level=info msg="runSandbox: deleting pod ID 2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882 from idIndex" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.070334883Z" level=info msg="runSandbox: removing pod sandbox 2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.070348100Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.070361162Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.086460675Z" level=info msg="runSandbox: removing pod sandbox from storage: 2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.089966971Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.089984759Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=eef863ef-0084-4ffd-89a5-744ce4374032 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:41.090201 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:41.090244 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:41.090267 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:41.090307 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2aff98e9fe6c3ccd24d35d3fe19c1842cb4dcc19311d152c7ce92479a40de882): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373127287Z" level=info msg="NetworkStart: stopping network for sandbox 8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373285036Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/92d3b76a-2710-404d-b0a4-0e60fb005fe3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373313004Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373320010Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373327477Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373745312Z" level=info msg="NetworkStart: stopping network for sandbox 3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373893517Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/dc8aaca9-0ab8-48cb-8ac4-bad8b11147ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373917826Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373926217Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.373932993Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374632480Z" level=info msg="NetworkStart: stopping network for sandbox e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374676057Z" level=info msg="NetworkStart: stopping network for sandbox 1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c" id=a53e95de-2fad-487d-89b3-86542e3011e3 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374756070Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d316a3b5-f2c0-48bf-b2b8-20ea1b24eeba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374778795Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374785831Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374792379Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374794662Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5ae23710-3db5-45e7-8d56-07482a54d4b8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374827567Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374834538Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.374841480Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.375952118Z" level=info msg="NetworkStart: stopping network for sandbox 77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.376052422Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f9152216-2f1d-4207-9edb-c83f88001579 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.376071554Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.376078476Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.376084039Z" level=info 
msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:41.995552 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.995861384Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:41.995900871Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.006801525Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/61f18a3f-1c79-4597-acac-46efa773d464 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.006822055Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.042485860Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.042518441Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.042520256Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.042618829Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a940b28d\x2d79f3\x2d42ee\x2d890e\x2dfc02dfee97c2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a940b28d\x2d79f3\x2d42ee\x2d890e\x2dfc02dfee97c2.mount has successfully entered the 'dead' state. Jan 23 17:01:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b4cbaa37\x2dbe57\x2d479d\x2db7ee\x2da46de5205b88.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b4cbaa37\x2dbe57\x2d479d\x2db7ee\x2da46de5205b88.mount has successfully entered the 'dead' state. Jan 23 17:01:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a940b28d\x2d79f3\x2d42ee\x2d890e\x2dfc02dfee97c2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a940b28d\x2d79f3\x2d42ee\x2d890e\x2dfc02dfee97c2.mount has successfully entered the 'dead' state. Jan 23 17:01:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b4cbaa37\x2dbe57\x2d479d\x2db7ee\x2da46de5205b88.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b4cbaa37\x2dbe57\x2d479d\x2db7ee\x2da46de5205b88.mount has successfully entered the 'dead' state. Jan 23 17:01:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a940b28d\x2d79f3\x2d42ee\x2d890e\x2dfc02dfee97c2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a940b28d\x2d79f3\x2d42ee\x2d890e\x2dfc02dfee97c2.mount has successfully entered the 'dead' state. 
Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.076309017Z" level=info msg="runSandbox: deleting pod ID 67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72 from idIndex" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.076334209Z" level=info msg="runSandbox: removing pod sandbox 67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.076348465Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.076359141Z" level=info msg="runSandbox: unmounting shmPath for sandbox 67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.080310427Z" level=info msg="runSandbox: deleting pod ID fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb from idIndex" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.080338126Z" level=info msg="runSandbox: removing pod sandbox fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.080355924Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.080370276Z" level=info msg="runSandbox: unmounting shmPath for sandbox fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.088561523Z" level=info msg="runSandbox: removing pod sandbox from storage: 67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.091821101Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.091842125Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=60acabc6-b6ae-4c78-8737-a72ad73949d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: E0123 17:01:42.092431 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:42.092481 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:01:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:42.092506 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.092444395Z" level=info msg="runSandbox: removing pod sandbox from storage: fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:42.092558 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.095768164Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:42.095787110Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=79ec1d7b-53da-4194-86e0-b518dc3a1701 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:42.096000 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:01:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:42.096041 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:01:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:42.096066 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:01:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:42.096111 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:01:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b4cbaa37\x2dbe57\x2d479d\x2db7ee\x2da46de5205b88.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b4cbaa37\x2dbe57\x2d479d\x2db7ee\x2da46de5205b88.mount has successfully entered the 'dead' state. Jan 23 17:01:43 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-67db174db40adf07f7a4efa282cb021c142b96b79eca07d8d1658ca90b587f72-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:01:43 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fe00eb264798fe6f27678b389e9c22520fcf980dfa457251be8b921531f48fbb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.035192543Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.035246339Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.035295322Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.035263664Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 
hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ef6fd0fe\x2dc2f1\x2d446f\x2dbcd2\x2d1b093ed40782.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ef6fd0fe\x2dc2f1\x2d446f\x2dbcd2\x2d1b093ed40782.mount has successfully entered the 'dead' state. Jan 23 17:01:45 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-88c266d8\x2d5595\x2d44b7\x2d9f9e\x2d58a2db430f54.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-88c266d8\x2d5595\x2d44b7\x2d9f9e\x2d58a2db430f54.mount has successfully entered the 'dead' state. Jan 23 17:01:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ef6fd0fe\x2dc2f1\x2d446f\x2dbcd2\x2d1b093ed40782.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ef6fd0fe\x2dc2f1\x2d446f\x2dbcd2\x2d1b093ed40782.mount has successfully entered the 'dead' state. Jan 23 17:01:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-88c266d8\x2d5595\x2d44b7\x2d9f9e\x2d58a2db430f54.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-88c266d8\x2d5595\x2d44b7\x2d9f9e\x2d58a2db430f54.mount has successfully entered the 'dead' state. Jan 23 17:01:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ef6fd0fe\x2dc2f1\x2d446f\x2dbcd2\x2d1b093ed40782.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ef6fd0fe\x2dc2f1\x2d446f\x2dbcd2\x2d1b093ed40782.mount has successfully entered the 'dead' state. Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.078354215Z" level=info msg="runSandbox: deleting pod ID 168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8 from idIndex" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.078389374Z" level=info msg="runSandbox: removing pod sandbox 168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.078404868Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.078417733Z" level=info msg="runSandbox: unmounting shmPath for sandbox 168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.079372696Z" level=info msg="runSandbox: deleting pod ID 43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a from idIndex" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.079403760Z" level=info msg="runSandbox: removing pod sandbox 43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a" id=de2061f1-6db6-409e-87aa-d2411a04b04f 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.079420754Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.079434808Z" level=info msg="runSandbox: unmounting shmPath for sandbox 43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.093433606Z" level=info msg="runSandbox: removing pod sandbox from storage: 43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.096818894Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.096838439Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=de2061f1-6db6-409e-87aa-d2411a04b04f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:45.097077 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:45.097248 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:01:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:45.097274 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:01:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:45.097329 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.097468373Z" level=info msg="runSandbox: removing pod sandbox from storage: 168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.101066909Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:45.101087101Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=4f01a58c-8fae-4b39-bd7c-633cbe7a6c91 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:45.101311 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:45.101361 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:01:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:45.101384 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:01:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:45.101428 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.032913000Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.032949018Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b59047f5\x2dc12f\x2d4afd\x2dbc11\x2dad5cdcb7eebd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b59047f5\x2dc12f\x2d4afd\x2dbc11\x2dad5cdcb7eebd.mount has successfully entered the 'dead' state. Jan 23 17:01:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-88c266d8\x2d5595\x2d44b7\x2d9f9e\x2d58a2db430f54.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-88c266d8\x2d5595\x2d44b7\x2d9f9e\x2d58a2db430f54.mount has successfully entered the 'dead' state. Jan 23 17:01:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-168be31b425d94bcb04e5c734843c413ea9b58f7d7ca23ed4bbd499bb1d4d8a8-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:01:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-43f391bb6a6dfbe0ced07a3d67ce4df8b2a479b9055cf455ead65df113cbae6a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:01:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b59047f5\x2dc12f\x2d4afd\x2dbc11\x2dad5cdcb7eebd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b59047f5\x2dc12f\x2d4afd\x2dbc11\x2dad5cdcb7eebd.mount has successfully entered the 'dead' state. Jan 23 17:01:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b59047f5\x2dc12f\x2d4afd\x2dbc11\x2dad5cdcb7eebd.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b59047f5\x2dc12f\x2d4afd\x2dbc11\x2dad5cdcb7eebd.mount has successfully entered the 'dead' state. Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.073314033Z" level=info msg="runSandbox: deleting pod ID 676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27 from idIndex" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.073339184Z" level=info msg="runSandbox: removing pod sandbox 676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.073355364Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.073368142Z" level=info msg="runSandbox: unmounting shmPath for sandbox 676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.085437445Z" level=info msg="runSandbox: removing pod sandbox from storage: 676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.088952863Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:46.088970597Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=fd5f6b7a-c839-4314-b218-c1a501d765e5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:46.089180 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:46.089232 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:01:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:46.089255 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:01:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:46.089312 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(676b8175f7f73e9850c0eaeef4722a3de56f2f80a529711c43f55c2022ad1f27): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.034229150Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.034270590Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.035663358Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.035697554Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-31e5a0f0\x2d56f0\x2d4044\x2d9e11\x2d337f17250661.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-31e5a0f0\x2d56f0\x2d4044\x2d9e11\x2d337f17250661.mount has successfully entered the 'dead' state. Jan 23 17:01:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-66b51b1e\x2d1a13\x2d4798\x2db648\x2db4d853770b23.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-66b51b1e\x2d1a13\x2d4798\x2db648\x2db4d853770b23.mount has successfully entered the 'dead' state. Jan 23 17:01:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-66b51b1e\x2d1a13\x2d4798\x2db648\x2db4d853770b23.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-66b51b1e\x2d1a13\x2d4798\x2db648\x2db4d853770b23.mount has successfully entered the 'dead' state. Jan 23 17:01:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-31e5a0f0\x2d56f0\x2d4044\x2d9e11\x2d337f17250661.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-31e5a0f0\x2d56f0\x2d4044\x2d9e11\x2d337f17250661.mount has successfully entered the 'dead' state. Jan 23 17:01:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-31e5a0f0\x2d56f0\x2d4044\x2d9e11\x2d337f17250661.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-31e5a0f0\x2d56f0\x2d4044\x2d9e11\x2d337f17250661.mount has successfully entered the 'dead' state. Jan 23 17:01:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-66b51b1e\x2d1a13\x2d4798\x2db648\x2db4d853770b23.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-66b51b1e\x2d1a13\x2d4798\x2db648\x2db4d853770b23.mount has successfully entered the 'dead' state. Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.080312854Z" level=info msg="runSandbox: deleting pod ID e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927 from idIndex" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.080339685Z" level=info msg="runSandbox: removing pod sandbox e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.080353543Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.080369833Z" level=info msg="runSandbox: unmounting shmPath for sandbox e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.080400293Z" level=info msg="runSandbox: deleting pod ID e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02 from idIndex" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.080429794Z" level=info msg="runSandbox: removing pod sandbox e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.080454209Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.080469840Z" 
level=info msg="runSandbox: unmounting shmPath for sandbox e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:01:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.092429235Z" level=info msg="runSandbox: removing pod sandbox from storage: e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.095815174Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.095836182Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=fa953f64-9728-44db-a475-7624986336f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:47.096014 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:47.096062 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:47.096085 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:47.096142 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e703b9a1a9634a045834945fde7067ac11dbd19a2326fd00389edb7201f0fe02): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.100434549Z" level=info msg="runSandbox: removing pod sandbox from storage: e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.103795044Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.103813149Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=b0e41a9a-30ec-4410-882e-01a2010bdb1a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:47.104009 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:47.104045 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:47.104066 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:47.104105 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(e9c89c05e4b3eed0b14de45167534bbb72aa10ada3717decf556a925e286e927): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:01:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:47.996070 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.996452095Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:47.996490717Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:48.011761957Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/8bc13ffe-7db8-4517-986a-51b82238d4b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:48.011787262Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:52.995435 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:52.995587 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:01:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:52.995773699Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:52.996008157Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:52.995938232Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:52.996167135Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:53.011138394Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/8edfe748-daea-49b4-99c5-58fb1c4203c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:53.011163890Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:53.011686785Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/91751e8a-8414-4708-aaa4-811c04c8ac86 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:53.011707628Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:53.995674 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:01:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:53.995800 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:01:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:53.996070253Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:53.996122003Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:53.996153164Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:53.996184107Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:54.012078490Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/d44e8964-9a38-4461-b313-f7d653768ff7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:54.012101795Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:54.012984020Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f9121c8d-4fb1-41ab-a507-acf273024571 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:54.013004048Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:54.996450 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:01:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:01:54.996960 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:01:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:57.996928 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:01:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:57.997332412Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:57.997384882Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:58.007848608Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/3763e21a-7c38-4240-b061-99901f621ae0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:58.007875742Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:58.145962373Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:01:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:58.995452 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:01:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:58.995955867Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:58.996001661Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:59.008014979Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/4748d0a4-116f-48e4-bdef-9c4da9b2c7f1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:01:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:59.008040362Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:01:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:59.995952 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:01:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:01:59.996122 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:01:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:59.996290179Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:59.996342356Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:01:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:59.996426925Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:01:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:01:59.996468027Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:02:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:00.011482459Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/d1b8d690-2700-4f72-a2cc-6a22a3117585 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:02:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:00.011506204Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:02:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:00.012201995Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/266c3fca-9be4-4221-a698-0f1acbb0bfc4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:02:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:00.012229848Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:02:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:00.996011 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:02:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:00.996427955Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:00.996467823Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:02:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:01.008460253Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/11045463-500d-426c-9134-2683ebfce63f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:02:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:01.008480608Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:02:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:07.997021 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:02:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:07.997506 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:02:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:19.996192 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" Jan 23 17:02:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:19.996714 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:02:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:24.021816738Z" level=info msg="NetworkStart: stopping network for sandbox 57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:24.022122886Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/d81fe4cb-bb91-4638-8706-12efb3ef7280 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:02:24 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:24.022149859Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:02:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:24.022156769Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:02:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:24.022164915Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.384500349Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.384543871Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.384802692Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.384841764Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.385182455Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.385227058Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.385663022Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.385694176Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.387538118Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.387566395Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5ae23710\x2d3db5\x2d45e7\x2d8d56\x2d07482a54d4b8.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d316a3b5\x2df2c0\x2d48bf\x2db2b8\x2d20ea1b24eeba.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dc8aaca9\x2d0ab8\x2d48cb\x2d8ac4\x2dbad8b11147ce.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-92d3b76a\x2d2710\x2d404d\x2db0a4\x2d0e60fb005fe3.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f9152216\x2d2f1d\x2d4207\x2d9edb\x2dc83f88001579.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d316a3b5\x2df2c0\x2d48bf\x2db2b8\x2d20ea1b24eeba.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5ae23710\x2d3db5\x2d45e7\x2d8d56\x2d07482a54d4b8.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dc8aaca9\x2d0ab8\x2d48cb\x2d8ac4\x2dbad8b11147ce.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-92d3b76a\x2d2710\x2d404d\x2db0a4\x2d0e60fb005fe3.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f9152216\x2d2f1d\x2d4207\x2d9edb\x2dc83f88001579.mount: Succeeded.
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443356396Z" level=info msg="runSandbox: deleting pod ID 8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683 from idIndex" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443389286Z" level=info msg="runSandbox: removing pod sandbox 8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443404310Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443418534Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443356924Z" level=info msg="runSandbox: deleting pod ID 3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98 from idIndex" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443481499Z" level=info msg="runSandbox: removing pod sandbox 3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443497994Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443513058Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443359597Z" level=info msg="runSandbox: deleting pod ID 1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c from idIndex" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443574019Z" level=info msg="runSandbox: removing pod sandbox 1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443587848Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.443599546Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.444274175Z" level=info msg="runSandbox: deleting pod ID e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b from idIndex" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.444297573Z" level=info msg="runSandbox: removing pod sandbox e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.444313184Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.444324864Z" level=info msg="runSandbox: unmounting shmPath for sandbox e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.447268699Z" level=info msg="runSandbox: deleting pod ID 77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241 from idIndex" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.447290608Z" level=info msg="runSandbox: removing pod sandbox 77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.447302979Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.447315994Z" level=info msg="runSandbox: unmounting shmPath for sandbox 77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.455440370Z" level=info msg="runSandbox: removing pod sandbox from storage: 1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.463436427Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.463454541Z" level=info msg="runSandbox: removing pod sandbox from storage: 3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98" id=3035154f-29b6-4db1-ac9e-10ad75f52cff 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.463441071Z" level=info msg="runSandbox: removing pod sandbox from storage: 8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.463461021Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=a53e95de-2fad-487d-89b3-86542e3011e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.463731 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.463893 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.463919 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.463973 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.464432650Z" level=info msg="runSandbox: removing pod sandbox from storage: e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.466857830Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.466877219Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=3035154f-29b6-4db1-ac9e-10ad75f52cff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.467099 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.467146 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.467166 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.467202 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.468431383Z" level=info msg="runSandbox: removing pod sandbox from storage: 77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.469961486Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.469981188Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f54aafed-3395-4181-b6d1-f31f9c947b39 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.470180 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.470220 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.470240 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.470275 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.473191765Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.473214737Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=5fdcf959-250c-4f83-8b1b-91425cdb97bc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.473317 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.473347 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.473368 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.473407 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.476539040Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.476558457Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=0f3ed4a3-f154-41c0-a1b7-f831cee48a7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.476763 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.476795 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.476815 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:26.476854 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:26.500888 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:26.501081 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501082897Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501111252Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:26.501161 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:26.501163 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:26.501235 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501452104Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501478013Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501527176Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501554616Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501661829Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501686176Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501702884Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.501688953Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.528438110Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/58d00f05-0b93-4be6-8cc0-c806161bcbed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.528460807Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.529146323Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3822823c-91e4-4933-a7d6-f66a8c1b7052 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.529169122Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.530924872Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/c8302127-19c4-485f-86fd-3146cafc71b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.530943538Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.531886071Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/f544b9bb-3e36-4ce5-bf4d-9b35728879e0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.531905915Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.532425043Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c9982e11-35f1-469d-936b-69c010f569c4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:26.532443673Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:27.019605737Z" level=info msg="NetworkStart: stopping network for sandbox 1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:27.019751635Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/61f18a3f-1c79-4597-acac-46efa773d464 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:27.019779117Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:27.019786179Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:27.019792228Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f9152216\x2d2f1d\x2d4207\x2d9edb\x2dc83f88001579.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-f9152216\x2d2f1d\x2d4207\x2d9edb\x2dc83f88001579.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5ae23710\x2d3db5\x2d45e7\x2d8d56\x2d07482a54d4b8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5ae23710\x2d3db5\x2d45e7\x2d8d56\x2d07482a54d4b8.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d316a3b5\x2df2c0\x2d48bf\x2db2b8\x2d20ea1b24eeba.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d316a3b5\x2df2c0\x2d48bf\x2db2b8\x2d20ea1b24eeba.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dc8aaca9\x2d0ab8\x2d48cb\x2d8ac4\x2dbad8b11147ce.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-dc8aaca9\x2d0ab8\x2d48cb\x2d8ac4\x2dbad8b11147ce.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-92d3b76a\x2d2710\x2d404d\x2db0a4\x2d0e60fb005fe3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-92d3b76a\x2d2710\x2d404d\x2db0a4\x2d0e60fb005fe3.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-3dedb7370983e760d7f3a8cdde4d4caff9feff1424a6d94bc496267670cb5c98-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-77d050000818e3ca2d4a13f134d108306dc8eed209317db88e0fcef259b94241-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1be09611e519f3fd652183ddb20fcdfadb91079ae235df9c2f04d7bb3fa1610c-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e5470fb8ebcc9b504182bcf93e23c309b8526bbdb43d1bf31307c82c5847d90b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-8fbc44a161074e8aa87f2c8ef317e814819adfb34560b5a8fb7d5a029d968683-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:02:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:27.880386 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:02:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:27.880404 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:02:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:27.880410 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:02:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:27.880416 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:02:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:27.880423 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:02:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:27.880429 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:02:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:27.880437 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:02:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:28.142064463Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:02:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:32.996562 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4"
Jan 23 17:02:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:32.997077 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:02:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:33.025778522Z" level=info msg="NetworkStart: stopping network for sandbox 9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:33.026108358Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/8bc13ffe-7db8-4517-986a-51b82238d4b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:33.026135842Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:33.026143412Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:33.026150694Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.024563254Z" level=info msg="NetworkStart: stopping network for sandbox d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.024757800Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/8edfe748-daea-49b4-99c5-58fb1c4203c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.024786863Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.024793721Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.024800525Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.025065263Z" level=info msg="NetworkStart: stopping network for sandbox 75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.025279995Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/91751e8a-8414-4708-aaa4-811c04c8ac86 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.025313656Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.025324009Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:38.025333465Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.025416101Z" level=info msg="NetworkStart: stopping network for sandbox 8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.025557316Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f9121c8d-4fb1-41ab-a507-acf273024571 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.025579761Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.025585931Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.025591869Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.025885805Z" level=info msg="NetworkStart: stopping network for sandbox 223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.026027020Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/d44e8964-9a38-4461-b313-f7d653768ff7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.026052814Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.026060104Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:39.026067411Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.019806743Z" level=info msg="NetworkStart: stopping network for sandbox 034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.020006845Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/3763e21a-7c38-4240-b061-99901f621ae0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.020032986Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.020040395Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.020048899Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:43.997066 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4"
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.997845706Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=6ed5b82f-8099-4c11-abb1-096f4f1e133a name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.998003118Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6ed5b82f-8099-4c11-abb1-096f4f1e133a name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.998439413Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=c66f307b-6ff5-4904-a2dc-1b72419a24c7 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.998540464Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c66f307b-6ff5-4904-a2dc-1b72419a24c7 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.999330136Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=bac53cb6-9f1b-4e70-95fc-b97d473da3d1 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:02:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:43.999412457Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope.
-- Subject: Unit crio-conmon-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.021565043Z" level=info msg="NetworkStart: stopping network for sandbox dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.021715403Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/4748d0a4-116f-48e4-bdef-9c4da9b2c7f1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.021742352Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.021749800Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.021757140Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.
-- Subject: Unit crio-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.111998796Z" level=info msg="Created container 6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=bac53cb6-9f1b-4e70-95fc-b97d473da3d1 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.112455364Z" level=info msg="Starting container: 6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" id=bbed5521-de4d-4dd9-925f-d02dc6cc91de name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.119031229Z" level=info msg="Started container" PID=99777 containerID=6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=bbed5521-de4d-4dd9-925f-d02dc6cc91de name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.123924404Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.134057750Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.134082222Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.134097159Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.143463762Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.143483329Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.143495547Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.152151591Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.152168098Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.152177380Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.160627120Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.160645029Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.160654837Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.169262559Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:44.169281014Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:44.537719 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/187.log"
Jan 23 17:02:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:44.538741 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685}
Jan 23 17:02:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:44.538889 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.024191974Z" level=info msg="NetworkStart: stopping network for sandbox 815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.024418598Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/d1b8d690-2700-4f72-a2cc-6a22a3117585 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.024447026Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.024454838Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.024462140Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.025608960Z" level=info msg="NetworkStart: stopping network for sandbox 4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.025727836Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/266c3fca-9be4-4221-a698-0f1acbb0bfc4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.025750072Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.025756892Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:45.025762850Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:46.020962757Z" level=info msg="NetworkStart: stopping network for sandbox f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:02:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:46.021151430Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/11045463-500d-426c-9134-2683ebfce63f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:46.021175971Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:46.021183322Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:46.021190353Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab conmon[99746]: conmon 6733411a4ba236496f97 : container 99777 exited with status 1
Jan 23 17:02:46 hub-master-0.workload.bos2.lab systemd[1]: crio-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope has successfully entered the 'dead' state.
Jan 23 17:02:46 hub-master-0.workload.bos2.lab systemd[1]: crio-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope: Consumed 574ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope completed and consumed the indicated resources.
Jan 23 17:02:46 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope has successfully entered the 'dead' state.
Jan 23 17:02:46 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope: Consumed 53ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685.scope completed and consumed the indicated resources.
Jan 23 17:02:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:46.543950 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/188.log"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:46.544402 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/187.log"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:46.545241 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" exitCode=1
Jan 23 17:02:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:46.545264 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685}
Jan 23 17:02:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:46.545286 8631 scope.go:115] "RemoveContainer" containerID="d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:46.546081181Z" level=info msg="Removing container: d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4" id=9ac0f81f-f4ce-4f3d-a575-cbd94dbc5791 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:02:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:46.546197 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:02:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:02:46.546717 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:02:46 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-fbbba4518a72cf078d3d80f402290c2ad6fdb69da6773727fdee572c597f6c87-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-fbbba4518a72cf078d3d80f402290c2ad6fdb69da6773727fdee572c597f6c87-merged.mount has successfully entered the 'dead' state.
Jan 23 17:02:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:46.586782907Z" level=info msg="Removed container d7dcfbc532e91c4b1ab802d0fadeb7b5e999348ffe61dd88afe3637e1efe60e4: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=9ac0f81f-f4ce-4f3d-a575-cbd94dbc5791 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:02:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:02:47.548811 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/188.log"
Jan 23 17:02:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:02:58.144767777Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:03:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:01.997110 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:03:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:01.997759 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.033345690Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.033404046Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d81fe4cb\x2dbb91\x2d4638\x2d8706\x2d12efb3ef7280.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d81fe4cb\x2dbb91\x2d4638\x2d8706\x2d12efb3ef7280.mount has successfully entered the 'dead' state.
Jan 23 17:03:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d81fe4cb\x2dbb91\x2d4638\x2d8706\x2d12efb3ef7280.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d81fe4cb\x2dbb91\x2d4638\x2d8706\x2d12efb3ef7280.mount has successfully entered the 'dead' state.
Jan 23 17:03:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d81fe4cb\x2dbb91\x2d4638\x2d8706\x2d12efb3ef7280.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d81fe4cb\x2dbb91\x2d4638\x2d8706\x2d12efb3ef7280.mount has successfully entered the 'dead' state.
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.081355181Z" level=info msg="runSandbox: deleting pod ID 57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0 from idIndex" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.081389794Z" level=info msg="runSandbox: removing pod sandbox 57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.081407974Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.081422271Z" level=info msg="runSandbox: unmounting shmPath for sandbox 57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.093428679Z" level=info msg="runSandbox: removing pod sandbox from storage: 57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.097310515Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:09.097330380Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=fae8f9d6-4c75-4be0-a80a-452b8f9ad47b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:09.097599 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:03:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:09.097652 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:03:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:09.097675 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:03:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:09.097725 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(57705279334d2d2b650932045dc9512791f9e9fd0119da052b2cccaa4b2c07f0): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542266127Z" level=info msg="NetworkStart: stopping network for sandbox 95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542276946Z" level=info msg="NetworkStart: stopping network for sandbox f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542417476Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/58d00f05-0b93-4be6-8cc0-c806161bcbed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542445041Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542453583Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542461440Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542523925Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3822823c-91e4-4933-a7d6-f66a8c1b7052 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542552060Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542561504Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.542570104Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.544138945Z" level=info msg="NetworkStart: stopping network for sandbox 49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.544282170Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/c8302127-19c4-485f-86fd-3146cafc71b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.544314189Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.544323356Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.544330834Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.545812154Z" level=info msg="NetworkStart: stopping network for sandbox 640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.545930004Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/f544b9bb-3e36-4ce5-bf4d-9b35728879e0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.545952818Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.545959883Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.545966841Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.546634654Z" level=info msg="NetworkStart: stopping network for sandbox 4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.546736654Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c9982e11-35f1-469d-936b-69c010f569c4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.546757825Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:11.546764054Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:03:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23
17:03:11.546769801Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.030427949Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.030461317Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-61f18a3f\x2d1c79\x2d4597\x2dacac\x2d46efa773d464.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-61f18a3f\x2d1c79\x2d4597\x2dacac\x2d46efa773d464.mount has successfully entered the 'dead' state. Jan 23 17:03:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-61f18a3f\x2d1c79\x2d4597\x2dacac\x2d46efa773d464.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-61f18a3f\x2d1c79\x2d4597\x2dacac\x2d46efa773d464.mount has successfully entered the 'dead' state. Jan 23 17:03:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-61f18a3f\x2d1c79\x2d4597\x2dacac\x2d46efa773d464.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-61f18a3f\x2d1c79\x2d4597\x2dacac\x2d46efa773d464.mount has successfully entered the 'dead' state. 
Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.075320827Z" level=info msg="runSandbox: deleting pod ID 1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e from idIndex" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.075347785Z" level=info msg="runSandbox: removing pod sandbox 1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.075362502Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.075374639Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.087451229Z" level=info msg="runSandbox: removing pod sandbox from storage: 1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.090909147Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:12.090927432Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=f02b8a2d-8b28-42da-88e0-17e1e6ed7c35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:12.091058 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:03:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:12.091105 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:03:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:12.091127 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:03:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:12.091171 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1952717b6beae083b9540732365641e6b6fe140508b2e88152cc26ab8354449e): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:03:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:12.996493 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:03:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:12.997009 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.037631177Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.037669964Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8bc13ffe\x2d7db8\x2d4517\x2d986a\x2d51b82238d4b6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8bc13ffe\x2d7db8\x2d4517\x2d986a\x2d51b82238d4b6.mount has successfully entered the 'dead' state. Jan 23 17:03:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8bc13ffe\x2d7db8\x2d4517\x2d986a\x2d51b82238d4b6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8bc13ffe\x2d7db8\x2d4517\x2d986a\x2d51b82238d4b6.mount has successfully entered the 'dead' state. Jan 23 17:03:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8bc13ffe\x2d7db8\x2d4517\x2d986a\x2d51b82238d4b6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8bc13ffe\x2d7db8\x2d4517\x2d986a\x2d51b82238d4b6.mount has successfully entered the 'dead' state. 
Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.084326580Z" level=info msg="runSandbox: deleting pod ID 9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559 from idIndex" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.084352010Z" level=info msg="runSandbox: removing pod sandbox 9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.084365972Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.084388241Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.097436594Z" level=info msg="runSandbox: removing pod sandbox from storage: 9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.100753166Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:18.100771673Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=6b008935-0023-42fb-b0c7-2fe8cf2fae9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:18.100987 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:18.101039 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:03:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:18.101063 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:03:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:18.101117 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9eb727503a7aeebf1b9e18884863378b332fd49bc34f24a2d334daa533c4f559): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:03:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:22.995859 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:03:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:22.996147045Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:22.996195330Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.008109938Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a54a775c-3e1a-46b3-be51-c3f39f552e7c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.008136643Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.035732221Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.035762699Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.035919549Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.035957680Z" 
level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-91751e8a\x2d8414\x2d4708\x2daaa4\x2d811c04c8ac86.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-91751e8a\x2d8414\x2d4708\x2daaa4\x2d811c04c8ac86.mount has successfully entered the 'dead' state. Jan 23 17:03:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8edfe748\x2ddaea\x2d49b4\x2d99c5\x2d58fb1c4203c9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8edfe748\x2ddaea\x2d49b4\x2d99c5\x2d58fb1c4203c9.mount has successfully entered the 'dead' state. Jan 23 17:03:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-91751e8a\x2d8414\x2d4708\x2daaa4\x2d811c04c8ac86.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-91751e8a\x2d8414\x2d4708\x2daaa4\x2d811c04c8ac86.mount has successfully entered the 'dead' state. Jan 23 17:03:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8edfe748\x2ddaea\x2d49b4\x2d99c5\x2d58fb1c4203c9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8edfe748\x2ddaea\x2d49b4\x2d99c5\x2d58fb1c4203c9.mount has successfully entered the 'dead' state. Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.085388278Z" level=info msg="runSandbox: deleting pod ID 75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5 from idIndex" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.085417489Z" level=info msg="runSandbox: removing pod sandbox 75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.085430598Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.085442651Z" level=info msg="runSandbox: unmounting shmPath for sandbox 75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.086358331Z" level=info msg="runSandbox: deleting pod ID d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8 from idIndex" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.086389659Z" level=info msg="runSandbox: removing pod sandbox d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:03:23.086404846Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.086419585Z" level=info msg="runSandbox: unmounting shmPath for sandbox d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.097443242Z" level=info msg="runSandbox: removing pod sandbox from storage: 75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.100066890Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.100085558Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=4dd5a7e1-0f91-4e9f-9181-1bcb49e2bace name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:23.100350 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:23.100396 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:03:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:23.100419 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:03:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:23.100466 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.101481907Z" level=info msg="runSandbox: removing pod sandbox from storage: d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.105076776Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:23.105097623Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=88d77ec1-041c-4b06-b33b-40045c852f27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:23.105247 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:23.105286 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:03:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:23.105309 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:03:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:23.105356 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:03:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-91751e8a\x2d8414\x2d4708\x2daaa4\x2d811c04c8ac86.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-91751e8a\x2d8414\x2d4708\x2daaa4\x2d811c04c8ac86.mount has successfully entered the 'dead' state. Jan 23 17:03:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8edfe748\x2ddaea\x2d49b4\x2d99c5\x2d58fb1c4203c9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8edfe748\x2ddaea\x2d49b4\x2d99c5\x2d58fb1c4203c9.mount has successfully entered the 'dead' state. Jan 23 17:03:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-75ebeaafc93c2f1c653ff72108f2107f1fb6fb81f81c498b7788e4520a10e2c5-userdata-shm.mount: Succeeded. 
Jan 23 17:03:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d7cee2c770bc33c2b7a123834cd36ef9f5ea6ece3c3e686e02f6b1e2d142e8b8-userdata-shm.mount: Succeeded. Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.036177044Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.036223444Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.037597921Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.037631603Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f9121c8d\x2d4fb1\x2d41ab\x2da507\x2dacf273024571.mount: Succeeded. Jan 23 17:03:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d44e8964\x2d9a38\x2d4461\x2db313\x2df7d653768ff7.mount: Succeeded.
Jan 23 17:03:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d44e8964\x2d9a38\x2d4461\x2db313\x2df7d653768ff7.mount: Succeeded. Jan 23 17:03:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f9121c8d\x2d4fb1\x2d41ab\x2da507\x2dacf273024571.mount: Succeeded. Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.081338854Z" level=info msg="runSandbox: deleting pod ID 8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd from idIndex" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.081367759Z" level=info msg="runSandbox: removing pod sandbox 8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.081381979Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.081393772Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.081344655Z" level=info msg="runSandbox: deleting pod ID 223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2 from idIndex" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.081466689Z" level=info msg="runSandbox: removing pod sandbox 223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.081481966Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.081498101Z" level=info msg="runSandbox: unmounting shmPath for sandbox 223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.105434896Z" level=info msg="runSandbox: removing pod sandbox from
storage: 223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.105463210Z" level=info msg="runSandbox: removing pod sandbox from storage: 8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.109096943Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.109143056Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=f7909a8b-1c68-4c78-9512-09f1ee2ca011 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:24.109416 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:24.109462 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:03:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:24.109485 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:03:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:24.109533 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.112219430Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:24.112238360Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=ccac8e23-e990-4c39-9e26-ad2d8e765dd7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:24.112422 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:24.112457 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:03:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:24.112480 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:03:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:24.112526 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:03:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f9121c8d\x2d4fb1\x2d41ab\x2da507\x2dacf273024571.mount: Succeeded. Jan 23 17:03:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d44e8964\x2d9a38\x2d4461\x2db313\x2df7d653768ff7.mount: Succeeded. Jan 23 17:03:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8b634f34cc47e5fe17153100d01c4d5b321064ab6dba8eba2b129b182042cedd-userdata-shm.mount: Succeeded. Jan 23 17:03:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-223bf7db2b33cd200372d70ba549a6eb7c29f2182a2a370a4322cf3779cdbda2-userdata-shm.mount: Succeeded. Jan 23 17:03:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:26.996332 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:03:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:26.996779022Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:26.996833024Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:03:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:26.997258 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:03:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:26.997772 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:03:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:27.011905834Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/1e1a2122-d96c-4dfc-a671-5fe41db4923a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:03:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:27.011939727Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:27.881316 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:03:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:27.881337 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:03:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:27.881344 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:03:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:27.881352 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:03:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:27.881357 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:03:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:27.881364 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:03:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:27.881369 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.031269326Z" 
level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.031310972Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3763e21a\x2d7c38\x2d4240\x2db061\x2d99901f621ae0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3763e21a\x2d7c38\x2d4240\x2db061\x2d99901f621ae0.mount has successfully entered the 'dead' state. Jan 23 17:03:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3763e21a\x2d7c38\x2d4240\x2db061\x2d99901f621ae0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3763e21a\x2d7c38\x2d4240\x2db061\x2d99901f621ae0.mount has successfully entered the 'dead' state. Jan 23 17:03:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3763e21a\x2d7c38\x2d4240\x2db061\x2d99901f621ae0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3763e21a\x2d7c38\x2d4240\x2db061\x2d99901f621ae0.mount has successfully entered the 'dead' state. 
Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.077313472Z" level=info msg="runSandbox: deleting pod ID 034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b from idIndex" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.077341455Z" level=info msg="runSandbox: removing pod sandbox 034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.077356845Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.077370750Z" level=info msg="runSandbox: unmounting shmPath for sandbox 034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.096444817Z" level=info msg="runSandbox: removing pod sandbox from storage: 034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.099741660Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.099761123Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=87f34580-3cef-4c04-a525-bade5cd7430a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:28.099957 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:28.100001 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:03:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:28.100026 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:03:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:28.100075 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(034a4ebc04d4c770b2ede25f4b2ecc7e3c8bbac67c688253ca9a8249ddc4871b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:03:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:28.142040869Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.033741938Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.033785580Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4748d0a4\x2d116f\x2d48e4\x2dbdef\x2d9c4da9b2c7f1.mount: Succeeded. Jan 23 17:03:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4748d0a4\x2d116f\x2d48e4\x2dbdef\x2d9c4da9b2c7f1.mount: Succeeded. Jan 23 17:03:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4748d0a4\x2d116f\x2d48e4\x2dbdef\x2d9c4da9b2c7f1.mount: Succeeded. 
Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.071322771Z" level=info msg="runSandbox: deleting pod ID dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76 from idIndex" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.071349148Z" level=info msg="runSandbox: removing pod sandbox dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.071370593Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.071385963Z" level=info msg="runSandbox: unmounting shmPath for sandbox dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76-userdata-shm.mount: Succeeded. Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.083435792Z" level=info msg="runSandbox: removing pod sandbox from storage: dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.087031541Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:29.087050545Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=f6bc4cf2-4d70-4e39-96bf-5778fc269354 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:29.087274 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:29.087444 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:03:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:29.087470 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:03:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:29.087524 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(dba847b2f9bcd9baf4fce6ae6b28924ce8c33f1db7254af40218c5a917006f76): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.036043281Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.036080096Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.036121648Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.036088616Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-266c3fca\x2d9be4\x2d4221\x2da698\x2d0f1acbb0bfc4.mount: Succeeded. Jan 23 17:03:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d1b8d690\x2d2700\x2d4f72\x2da2cc\x2d6a22a3117585.mount: Succeeded. Jan 23 17:03:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-266c3fca\x2d9be4\x2d4221\x2da698\x2d0f1acbb0bfc4.mount: Succeeded. 
Jan 23 17:03:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d1b8d690\x2d2700\x2d4f72\x2da2cc\x2d6a22a3117585.mount: Succeeded. Jan 23 17:03:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-266c3fca\x2d9be4\x2d4221\x2da698\x2d0f1acbb0bfc4.mount: Succeeded. Jan 23 17:03:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d1b8d690\x2d2700\x2d4f72\x2da2cc\x2d6a22a3117585.mount: Succeeded. Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.079323135Z" level=info msg="runSandbox: deleting pod ID 4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff from idIndex" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.079353516Z" level=info msg="runSandbox: removing pod sandbox 4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.079369058Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.079382057Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.079325239Z" level=info msg="runSandbox: deleting pod ID 815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8 from idIndex" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.079434035Z" level=info msg="runSandbox: removing pod sandbox 815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.079446926Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.079459453Z" 
level=info msg="runSandbox: unmounting shmPath for sandbox 815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff-userdata-shm.mount: Succeeded. Jan 23 17:03:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8-userdata-shm.mount: Succeeded. Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.091474585Z" level=info msg="runSandbox: removing pod sandbox from storage: 4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.091483966Z" level=info msg="runSandbox: removing pod sandbox from storage: 815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.094767652Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.094786059Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=f47ff26a-d4f7-4a4a-b177-aa6a77648c77 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:30.095009 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:30.095050 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:30.095071 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:30.095117 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4be0c5ea8cf690e1b8842f214d5e4d9e369deb30976c0a9ee0c83ddaea8040ff): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.097773779Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.097791954Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1abe19fb-47f2-42a2-9b30-591b86cbb3ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:30.097979 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:30.098024 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:30.098051 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:30.098106 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(815b96b67261a659e6b4681c47e314781008bfd5cab87b96363b4e4cb534a1b8): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:03:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:30.995534 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.995890474Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:30.995940413Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.007082326Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/de3aeb81-ae62-425a-878f-ac8eb5e6f5a8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.007104408Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.032047837Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.032079048Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-11045463\x2d500d\x2d426c\x2d9134\x2d2683ebfce63f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-11045463\x2d500d\x2d426c\x2d9134\x2d2683ebfce63f.mount has successfully entered the 'dead' state. Jan 23 17:03:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-11045463\x2d500d\x2d426c\x2d9134\x2d2683ebfce63f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-11045463\x2d500d\x2d426c\x2d9134\x2d2683ebfce63f.mount has successfully entered the 'dead' state. Jan 23 17:03:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-11045463\x2d500d\x2d426c\x2d9134\x2d2683ebfce63f.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-11045463\x2d500d\x2d426c\x2d9134\x2d2683ebfce63f.mount has successfully entered the 'dead' state. Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.083284220Z" level=info msg="runSandbox: deleting pod ID f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb from idIndex" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.083310509Z" level=info msg="runSandbox: removing pod sandbox f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.083327782Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.083341415Z" level=info msg="runSandbox: unmounting shmPath for sandbox f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.103412148Z" level=info msg="runSandbox: removing pod sandbox from storage: f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.106321358Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:31.106339635Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=4557e4e6-6941-4310-a1a4-823c70350f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:03:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:31.106572 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:03:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:31.106616 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:03:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:31.106639 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:03:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:31.106690 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(f1b1322f80c86c8e9eee99589479a6787b9097816968e0e9e8729dab0bb8c6cb): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:03:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:33.995979 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:03:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:33.996388005Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:33.996599848Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:34.007557286Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/f547bcd5-e791-4320-9f9e-9cb763e95f81 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:34.007578123Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:35.996055 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:03:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:35.996438616Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:35.996491811Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:36.007290936Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/c902861b-7d60-4ef7-b0c4-dd0767711183 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:36.007315863Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:36.996389 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:03:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:36.996735826Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:36.996777621Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:37.012492745Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/07cbce57-cd4a-47d3-baa6-d95789d5f660 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:37.012523083Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:37.997318 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:03:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:37.997630422Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:37.997662878Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:38.007892828Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/0294e18f-f366-43b3-96ea-52dd9330a447 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:38.007912406Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:38.996076 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:03:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:38.996586 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:03:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:40.995752 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:03:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:40.996199245Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:40.996251584Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:41.006903261Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/e5bcc5bd-0343-4c44-9c9e-f8a11cf5074f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:41.006923131Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:41.996246 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:03:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:41.996536861Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:41.996588670Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:42.007491316Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/73aca002-2a6d-4bfa-8c84-0ea5030b948c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:42.007517801Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:43.996069 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:03:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:43.996160 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:03:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:43.996391193Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:43.996420606Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:43.996443381Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:43.996448869Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:44.010076385Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/ed78c465-2d44-43b6-b3e2-b04718c04f2d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:44.010095363Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:44.011778893Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/48329b72-7494-43bd-950c-c0505ae6cca4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:44.011797618Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:46.996449 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:03:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:46.996782049Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:46.996830644Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:47.007456021Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/b8ef9bdc-525b-43ea-b676-3acbabd92f44 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:03:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:47.007474486Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:03:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:50.996540 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:03:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:50.997050 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.553745031Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.553819149Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.553848918Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.553901474Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.555690374Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.555728852Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.556308366Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.556350291Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.557101009Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.557130887Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3822823c\x2d91e4\x2d4933\x2da7d6\x2df66a8c1b7052.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-58d00f05\x2d0b93\x2d4be6\x2d8cc0\x2dc806161bcbed.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c9982e11\x2d35f1\x2d469d\x2d936b\x2d69c010f569c4.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f544b9bb\x2d3e36\x2d4ce5\x2dbf4d\x2d9b35728879e0.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c8302127\x2d19c4\x2d485f\x2d86fd\x2d3146cafc71b2.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f544b9bb\x2d3e36\x2d4ce5\x2dbf4d\x2d9b35728879e0.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c8302127\x2d19c4\x2d485f\x2d86fd\x2d3146cafc71b2.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3822823c\x2d91e4\x2d4933\x2da7d6\x2df66a8c1b7052.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c9982e11\x2d35f1\x2d469d\x2d936b\x2d69c010f569c4.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-58d00f05\x2d0b93\x2d4be6\x2d8cc0\x2dc806161bcbed.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c8302127\x2d19c4\x2d485f\x2d86fd\x2d3146cafc71b2.mount: Succeeded.
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603297401Z" level=info msg="runSandbox: deleting pod ID 640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c from idIndex" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603325411Z" level=info msg="runSandbox: removing pod sandbox 640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603342193Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603355418Z" level=info msg="runSandbox: unmounting shmPath for sandbox 640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603304277Z" level=info msg="runSandbox: deleting pod ID 4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419 from idIndex" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603407673Z" level=info msg="runSandbox: removing pod sandbox 4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603421022Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603432536Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603386329Z" level=info msg="runSandbox: deleting pod ID f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62 from idIndex" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603494265Z" level=info msg="runSandbox: removing pod sandbox f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603510474Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603525454Z" level=info msg="runSandbox: unmounting shmPath for sandbox f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603304246Z" level=info msg="runSandbox: deleting pod ID 49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7 from idIndex" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603588154Z" level=info msg="runSandbox: removing pod sandbox 49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603602062Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.603614426Z" level=info msg="runSandbox: unmounting shmPath for sandbox 49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.608324600Z" level=info msg="runSandbox: deleting pod ID 95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983 from idIndex" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.608347924Z" level=info msg="runSandbox: removing pod sandbox 95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.608360034Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.608371460Z" level=info msg="runSandbox: unmounting shmPath for sandbox 95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.611470908Z" level=info msg="runSandbox: removing pod sandbox from storage: f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.614768243Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.614789895Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2d303ef4-76ef-46b9-bbb9-bd98d9ac2e29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.615036 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.615088 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.615111 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.615160 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.615458592Z" level=info msg="runSandbox: removing pod sandbox from storage: 49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.615484498Z" level=info msg="runSandbox: removing pod sandbox from storage: 4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.616456012Z" level=info msg="runSandbox: removing pod sandbox from storage: 640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.618740630Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.618759248Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=9d8abcc3-4ce7-4528-ac3a-d7dc66a10fbc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.618967 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.619014 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.619037 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.619086 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.619451814Z" level=info msg="runSandbox: removing pod sandbox from storage: 95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.622077371Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.622094985Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ae5e89bb-191e-4915-bf0f-6816cefad000 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.622318 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.622352 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.622372 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.622407 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.625611475Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.625631564Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c4039156-7929-4935-bcaf-e4b394d6df00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.625851 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.625884 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.625907 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.625943 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.628830210Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.628850373Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=4463f78c-7a75-4851-bfa9-fbcad8f017c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.629048 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.629088 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.629109 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:03:56.629147 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:56.679909 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:56.680111 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:56.680170 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680196322Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680237505Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:56.680218 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:03:56.680357 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680516963Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680547946Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680555029Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680582588Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680587643Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680528457Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680648399Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.680622583Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.708371406Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/e020089a-7cdb-4d8e-a744-e8700d8cb417 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.708399542Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.708616086Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5022b15c-3f74-432e-a144-8f15f2b9ff82 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.708642532Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.708623578Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c460a6fa-d7a4-479a-8988-8cd767b173c4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.708733553Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.714141875Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/6d5a166c-f082-4a78-b446-6e10729c7d0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.714163744Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.715014259Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/fc5b7405-0415-49c3-8da1-bdedde06254c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:03:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:56.715035397Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c9982e11\x2d35f1\x2d469d\x2d936b\x2d69c010f569c4.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c9982e11\x2d35f1\x2d469d\x2d936b\x2d69c010f569c4.mount has successfully entered the 'dead' state. Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f544b9bb\x2d3e36\x2d4ce5\x2dbf4d\x2d9b35728879e0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f544b9bb\x2d3e36\x2d4ce5\x2dbf4d\x2d9b35728879e0.mount has successfully entered the 'dead' state. Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3822823c\x2d91e4\x2d4933\x2da7d6\x2df66a8c1b7052.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3822823c\x2d91e4\x2d4933\x2da7d6\x2df66a8c1b7052.mount has successfully entered the 'dead' state. Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-58d00f05\x2d0b93\x2d4be6\x2d8cc0\x2dc806161bcbed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-58d00f05\x2d0b93\x2d4be6\x2d8cc0\x2dc806161bcbed.mount has successfully entered the 'dead' state. Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f68dd08090662f57010dc2e4d0be29f400d0bdd0465818090a5c27e26f786c62-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4d5434e7ab00bbee75cf4e621792e2960f7254c7107782fe4f3b1c28498e1419-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-640be2d2010c941212ad789464ab7e970bc207972fc11f12b7aa3f30e8d8aa8c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-49b944d1d6db316c1550999a49a82526a486e09ee86a2c086db97f354eb9f1d7-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:03:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-95206cf85deb5a8816527c476ffda2845b486c64274579f2a200b47a70a30983-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:03:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:03:58.143370127Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:04:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:01.997167 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:04:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:01.997811 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:04:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:08.020914917Z" level=info msg="NetworkStart: stopping network for sandbox 946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:08.021059035Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a54a775c-3e1a-46b3-be51-c3f39f552e7c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:08.021083993Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:08.021090694Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:08.021097825Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493448.1364] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:04:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493448.1370] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:04:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493448.1371] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:04:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493448.1373] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:04:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493448.1378] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:04:08 
hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493448.1383] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:04:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493450.0354] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:04:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:12.026578784Z" level=info msg="NetworkStart: stopping network for sandbox fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:12.026941942Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/1e1a2122-d96c-4dfc-a671-5fe41db4923a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:12.026968176Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:12.026975014Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:12.026981608Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:16.019634434Z" level=info msg="NetworkStart: stopping network for sandbox 4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:16.019863586Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/de3aeb81-ae62-425a-878f-ac8eb5e6f5a8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:16.019888213Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:16.019895303Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:16.019901790Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:16.996691 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:04:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:16.997223 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:04:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:19.020838497Z" level=info msg="NetworkStart: stopping network for sandbox 962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:19.021037507Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/f547bcd5-e791-4320-9f9e-9cb763e95f81 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:19.021063906Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:19.021070176Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:19.021077004Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:21.019310165Z" level=info msg="NetworkStart: stopping network for sandbox 7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:21.019457637Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/c902861b-7d60-4ef7-b0c4-dd0767711183 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:21.019479747Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:21.019486189Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:21.019493562Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:22.026249409Z" level=info msg="NetworkStart: stopping network for sandbox 41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:22.026394759Z" level=info 
msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/07cbce57-cd4a-47d3-baa6-d95789d5f660 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:22.026415674Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:22.026422238Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:22.026428790Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:23.021010016Z" level=info msg="NetworkStart: stopping network for sandbox 3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:23.021151649Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/0294e18f-f366-43b3-96ea-52dd9330a447 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:23.021176924Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:23.021184684Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:23.021191101Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:26.020799397Z" level=info msg="NetworkStart: stopping network for sandbox 5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:26.020954750Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/e5bcc5bd-0343-4c44-9c9e-f8a11cf5074f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:26.020978675Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:26.020985277Z" level=warning msg="falling back to loading from existing 
plugins on disk" Jan 23 17:04:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:26.020991353Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:27.019900742Z" level=info msg="NetworkStart: stopping network for sandbox 48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:27.020086178Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/73aca002-2a6d-4bfa-8c84-0ea5030b948c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:27.020112200Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:27.020119138Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:27.020125330Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:27.882162 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:04:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:27.882302 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:04:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:27.882309 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:04:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:27.882315 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:04:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:27.882321 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:04:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:27.882327 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:04:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:27.882332 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:04:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:28.142014341Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.023896041Z" level=info msg="NetworkStart: 
stopping network for sandbox 5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024059158Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/ed78c465-2d44-43b6-b3e2-b04718c04f2d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024062341Z" level=info msg="NetworkStart: stopping network for sandbox 6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024086154Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024194818Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024203723Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024236170Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/48329b72-7494-43bd-950c-c0505ae6cca4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024261594Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024270602Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:29.024277062Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:30.996995 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:04:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:30.997550 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:04:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:32.019625761Z" level=info msg="NetworkStart: stopping 
network for sandbox efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:32.019771689Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/b8ef9bdc-525b-43ea-b676-3acbabd92f44 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:32.019794346Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:32.019800490Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:32.019806703Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.722566826Z" level=info msg="NetworkStart: stopping network for sandbox b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.722715454Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c460a6fa-d7a4-479a-8988-8cd767b173c4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.722740597Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.722747507Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.722754544Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.722935993Z" level=info msg="NetworkStart: stopping network for sandbox 8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723050994Z" level=info msg="NetworkStart: stopping network for sandbox 4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723087770Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication 
ID:8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/e020089a-7cdb-4d8e-a744-e8700d8cb417 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723118765Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723128528Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723138792Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723172439Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5022b15c-3f74-432e-a144-8f15f2b9ff82 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723200229Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723220360Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.723231317Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.727710552Z" level=info msg="NetworkStart: stopping network for sandbox fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.727831915Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/fc5b7405-0415-49c3-8da1-bdedde06254c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.727854466Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.727861129Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.727867044Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.727954403Z" level=info 
msg="NetworkStart: stopping network for sandbox 8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.728098817Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/6d5a166c-f082-4a78-b446-6e10729c7d0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.728123193Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.728130999Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:04:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:41.728139082Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:04:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:41.996555 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:04:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:41.997079 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.032361016Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.032399101Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a54a775c\x2d3e1a\x2d46b3\x2dbe51\x2dc3f39f552e7c.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a54a775c\x2d3e1a\x2d46b3\x2dbe51\x2dc3f39f552e7c.mount has successfully entered the 'dead' state. Jan 23 17:04:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a54a775c\x2d3e1a\x2d46b3\x2dbe51\x2dc3f39f552e7c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a54a775c\x2d3e1a\x2d46b3\x2dbe51\x2dc3f39f552e7c.mount has successfully entered the 'dead' state. Jan 23 17:04:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a54a775c\x2d3e1a\x2d46b3\x2dbe51\x2dc3f39f552e7c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a54a775c\x2d3e1a\x2d46b3\x2dbe51\x2dc3f39f552e7c.mount has successfully entered the 'dead' state. Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.069343421Z" level=info msg="runSandbox: deleting pod ID 946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2 from idIndex" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.069377848Z" level=info msg="runSandbox: removing pod sandbox 946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.069397157Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.069412840Z" level=info msg="runSandbox: unmounting shmPath for sandbox 946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2-userdata-shm.mount has successfully entered the 'dead' state. 
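The entries above show the loop this journal repeats for every affected pod on the node: kubelet finds no sandbox and asks CRI-O to run a new one, CRI-O hands the pod to the Multus CNI plugin for the ADD, Multus waits for the default network's readiness indicator file at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, the wait times out because ovnkube-node-897lw (the pod that would bring OVN-Kubernetes up and write that file) is itself in CrashLoopBackOff, and CRI-O then tears the half-created sandbox back down. A minimal sketch for tallying this loop from a journal like the one above, assuming it has been exported to a plain-text file (the file name and the regexes are illustrative, keyed to the messages visible here):

#!/usr/bin/env python3
# Tally the Multus readiness-indicator timeouts per pod, and the container
# back-offs that keep them failing. Assumes the journal was exported first,
# e.g.: journalctl -u kubelet -u crio > node.log  (file name is illustrative)
import re
from collections import Counter

ADD_FAIL = re.compile(
    r'Multus: \[([^/]+/[^/]+)/[0-9a-f-]+\]: have you checked that your '
    r'default network is ready\?')
BACKOFF = re.compile(
    r'back-off (\S+) restarting failed container=(\S+) pod=([\w.-]+)')

timeouts, backoffs = Counter(), Counter()
with open('node.log') as log:
    for line in log:
        if m := ADD_FAIL.search(line):
            timeouts[m.group(1)] += 1                 # namespace/pod
        if m := BACKOFF.search(line):
            backoffs[(m.group(3), m.group(1))] += 1   # pod, back-off period

for pod, n in timeouts.most_common():
    print(f'{n:4d} CNI ADD timeouts  {pod}')
for (pod, delay), n in backoffs.most_common():
    print(f'{n:4d} restarts (back-off {delay})  {pod}')

Run against this node's journal, the tally would surface apiserver-746c4bf98c-9x4mg, dns-default-srzv5, the guard pods, and the rest of the pods cycling above, all blocked on the same missing indicator file.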
Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.085423144Z" level=info msg="runSandbox: removing pod sandbox from storage: 946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.093341810Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:53.093371475Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=e4774559-af16-408f-8862-f883d9f5e6dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:53.093817 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:04:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:53.093865 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:04:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:53.093891 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:04:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:53.093940 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(946f0d7502820b9699af5d5c62edec7626dba832ce2a1b937866cc35ce1037a2): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:04:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:04:54.996295 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:04:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:54.996947 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.038438049Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.038479689Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1e1a2122\x2dd96c\x2d4dfc\x2da671\x2d5fe41db4923a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1e1a2122\x2dd96c\x2d4dfc\x2da671\x2d5fe41db4923a.mount has successfully entered the 'dead' state. Jan 23 17:04:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1e1a2122\x2dd96c\x2d4dfc\x2da671\x2d5fe41db4923a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1e1a2122\x2dd96c\x2d4dfc\x2da671\x2d5fe41db4923a.mount has successfully entered the 'dead' state. Jan 23 17:04:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1e1a2122\x2dd96c\x2d4dfc\x2da671\x2d5fe41db4923a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1e1a2122\x2dd96c\x2d4dfc\x2da671\x2d5fe41db4923a.mount has successfully entered the 'dead' state. 
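A note on the mount-unit names in the cleanup entries above: systemd escapes '-' as \x2d inside unit names, so run-netns-1e1a2122\x2dd96c\x2d4dfc\x2da671\x2d5fe41db4923a.mount is the mount for the network namespace 1e1a2122-d96c-4dfc-a671-5fe41db4923a that CRI-O assigned to the dns-default-srzv5 sandbox (the NetNS:/var/run/netns/... field earlier in the log). From a shell, systemd-escape --unescape performs the decoding; a one-function Python equivalent:

import re

def unescape_unit(name: str) -> str:
    # Decode systemd's \xNN escapes back to the original characters.
    return re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit(
    r'run-netns-1e1a2122\x2dd96c\x2d4dfc\x2da671\x2d5fe41db4923a.mount'))
# -> run-netns-1e1a2122-d96c-4dfc-a671-5fe41db4923a.mount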
Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.074357358Z" level=info msg="runSandbox: deleting pod ID fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235 from idIndex" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.074383007Z" level=info msg="runSandbox: removing pod sandbox fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.074398693Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.074410787Z" level=info msg="runSandbox: unmounting shmPath for sandbox fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.090450421Z" level=info msg="runSandbox: removing pod sandbox from storage: fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.093903229Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:57.093921029Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=65e2c4cc-741a-4863-bfd4-538433085bfc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:04:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:57.094146 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:04:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:57.094199 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:04:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:57.094227 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:04:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:04:57.094276 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fb1b0dfdbd021d8aa1c3a5c51faafb5cd567255438ba448ae2ca8cc97d089235): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:04:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:04:58.141285403Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.030893868Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.030937414Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-de3aeb81\x2dae62\x2d425a\x2d878f\x2dac8eb5e6f5a8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-de3aeb81\x2dae62\x2d425a\x2d878f\x2dac8eb5e6f5a8.mount has successfully entered the 'dead' state. Jan 23 17:05:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-de3aeb81\x2dae62\x2d425a\x2d878f\x2dac8eb5e6f5a8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-de3aeb81\x2dae62\x2d425a\x2d878f\x2dac8eb5e6f5a8.mount has successfully entered the 'dead' state. Jan 23 17:05:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-de3aeb81\x2dae62\x2d425a\x2d878f\x2dac8eb5e6f5a8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-de3aeb81\x2dae62\x2d425a\x2d878f\x2dac8eb5e6f5a8.mount has successfully entered the 'dead' state. 
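The "PollImmediate error waiting for ReadinessIndicatorFile" / "pollimmediate error: timed out waiting for the condition" wording in these failures names the mechanism directly: Multus polls for the indicator file until a deadline and reports a timeout if it never appears (in the real plugin this is Go code built on Kubernetes' wait.PollImmediate helper; the sketch below is only a Python illustration of the same check, with illustrative interval and timeout values rather than Multus' actual defaults):

import os, time

def wait_for_readiness_file(path: str, interval: float = 1.0,
                            timeout: float = 600.0) -> bool:
    # Poll for the file's existence until the deadline passes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):   # written once the default network is up
            return True
        time.sleep(interval)
    return False                   # -> "timed out waiting for the condition"

if not wait_for_readiness_file(
        '/var/run/multus/cni/net.d/10-ovn-kubernetes.conf'):
    print('still waiting for readiness indicator file; '
          'is ovnkube-node running?')

So every ADD and DEL in this journal blocks on the same precondition, and none of the sandboxes can make progress until the ovnkube-node container stops crash-looping and writes that file.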
Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.065310032Z" level=info msg="runSandbox: deleting pod ID 4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1 from idIndex" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.065337091Z" level=info msg="runSandbox: removing pod sandbox 4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.065350653Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.065363733Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.076442263Z" level=info msg="runSandbox: removing pod sandbox from storage: 4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.079725879Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:01.079744229Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=d0eddb68-6d16-4598-bee3-7d7829a8afbc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:01.079872 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:01.079931 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:01.079955 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:01.080005 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(4e20ddced703b52d60f6e192381c25e1c5b65fe4d18f1c7ab9908bd229ce56a1): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.032682182Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.032725225Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f547bcd5\x2de791\x2d4320\x2d9f9e\x2d9cb763e95f81.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f547bcd5\x2de791\x2d4320\x2d9f9e\x2d9cb763e95f81.mount has successfully entered the 'dead' state. Jan 23 17:05:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f547bcd5\x2de791\x2d4320\x2d9f9e\x2d9cb763e95f81.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f547bcd5\x2de791\x2d4320\x2d9f9e\x2d9cb763e95f81.mount has successfully entered the 'dead' state. Jan 23 17:05:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f547bcd5\x2de791\x2d4320\x2d9f9e\x2d9cb763e95f81.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f547bcd5\x2de791\x2d4320\x2d9f9e\x2d9cb763e95f81.mount has successfully entered the 'dead' state. Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.070310883Z" level=info msg="runSandbox: deleting pod ID 962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e from idIndex" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.070337340Z" level=info msg="runSandbox: removing pod sandbox 962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.070357107Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.070374131Z" level=info msg="runSandbox: unmounting shmPath for sandbox 962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.086462436Z" level=info msg="runSandbox: removing pod sandbox from storage: 962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.089837552Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.089855904Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=de7fbf75-563e-4b40-b648-5da455bb8111 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:04.090038 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:04.090076 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:05:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:04.090099 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:05:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:04.090143 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(962627826cde8ce14aa23ad28c3c0cfd5eb088dcc254d4cc64c753934c23a02e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:05:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:04.996028 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.996316596Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:04.996350557Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:05.008027129Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/e3e6874d-afc1-4ff3-a1e0-5fec8a4b6830 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:05.008050166Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.029847584Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.030044330Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c902861b\x2d7d60\x2d4ef7\x2db0c4\x2ddd0767711183.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c902861b\x2d7d60\x2d4ef7\x2db0c4\x2ddd0767711183.mount has successfully entered the 'dead' state. Jan 23 17:05:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c902861b\x2d7d60\x2d4ef7\x2db0c4\x2ddd0767711183.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c902861b\x2d7d60\x2d4ef7\x2db0c4\x2ddd0767711183.mount has successfully entered the 'dead' state. Jan 23 17:05:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c902861b\x2d7d60\x2d4ef7\x2db0c4\x2ddd0767711183.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c902861b\x2d7d60\x2d4ef7\x2db0c4\x2ddd0767711183.mount has successfully entered the 'dead' state. Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.066312368Z" level=info msg="runSandbox: deleting pod ID 7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540 from idIndex" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.066335508Z" level=info msg="runSandbox: removing pod sandbox 7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.066350157Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.066363325Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.080464583Z" level=info msg="runSandbox: removing pod sandbox from storage: 7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.086796009Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:06.086820747Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=d70aa903-87ec-4d39-9849-b92b7c82bff8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:06.087004 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:06.087047 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:06.087071 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:06.087115 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(7dc226dadf2064a96a22da8d166f792f85fbd05ece79de1ba63e933dac474540): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.037461337Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.037494666Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-07cbce57\x2dcd4a\x2d47d3\x2dbaa6\x2dd95789d5f660.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-07cbce57\x2dcd4a\x2d47d3\x2dbaa6\x2dd95789d5f660.mount has successfully entered the 'dead' state. Jan 23 17:05:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-07cbce57\x2dcd4a\x2d47d3\x2dbaa6\x2dd95789d5f660.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-07cbce57\x2dcd4a\x2d47d3\x2dbaa6\x2dd95789d5f660.mount has successfully entered the 'dead' state. Jan 23 17:05:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-07cbce57\x2dcd4a\x2d47d3\x2dbaa6\x2dd95789d5f660.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-07cbce57\x2dcd4a\x2d47d3\x2dbaa6\x2dd95789d5f660.mount has successfully entered the 'dead' state. 
Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.072301583Z" level=info msg="runSandbox: deleting pod ID 41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c from idIndex" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.072324273Z" level=info msg="runSandbox: removing pod sandbox 41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.072337330Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.072355178Z" level=info msg="runSandbox: unmounting shmPath for sandbox 41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.085427678Z" level=info msg="runSandbox: removing pod sandbox from storage: 41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.088614081Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:07.088631553Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=6ba0ea3b-1487-448a-a9bb-2fa2c7c8a7fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:07.088869 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:05:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:07.088921 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:05:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:07.088946 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:05:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:07.089001 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(41da11805b2d1bf9ef49dc2ccdf97805b71da53146ed8fdff5f781ffc86af74c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.031148349Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.031179511Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0294e18f\x2df366\x2d43b3\x2d96ea\x2d52dd9330a447.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0294e18f\x2df366\x2d43b3\x2d96ea\x2d52dd9330a447.mount has successfully entered the 'dead' state. Jan 23 17:05:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0294e18f\x2df366\x2d43b3\x2d96ea\x2d52dd9330a447.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0294e18f\x2df366\x2d43b3\x2d96ea\x2d52dd9330a447.mount has successfully entered the 'dead' state. Jan 23 17:05:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0294e18f\x2df366\x2d43b3\x2d96ea\x2d52dd9330a447.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0294e18f\x2df366\x2d43b3\x2d96ea\x2d52dd9330a447.mount has successfully entered the 'dead' state. 
Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.074307683Z" level=info msg="runSandbox: deleting pod ID 3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd from idIndex" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.074332063Z" level=info msg="runSandbox: removing pod sandbox 3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.074344670Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.074356333Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.087412141Z" level=info msg="runSandbox: removing pod sandbox from storage: 3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.090736818Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:08.090755762Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=3616d080-bc9a-4804-9828-8b3acb709a84 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:08.090962 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:05:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:08.091007 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:05:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:08.091029 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:05:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:08.091075 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3f9664223f63bf1b9f8186d2ec105afad35ab44adbb4d2ecaa1d0ec6e03585cd): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:05:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:08.996039 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:05:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:08.996542 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:05:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:10.996346 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:05:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:10.996652423Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:10.996690197Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.007994812Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/7e284892-ec07-449c-9d9b-bba0bab38d2c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.008014157Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.032577391Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.032610811Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e5bcc5bd\x2d0343\x2d4c44\x2d9c9e\x2df8a11cf5074f.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e5bcc5bd\x2d0343\x2d4c44\x2d9c9e\x2df8a11cf5074f.mount has successfully entered the 'dead' state. Jan 23 17:05:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e5bcc5bd\x2d0343\x2d4c44\x2d9c9e\x2df8a11cf5074f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e5bcc5bd\x2d0343\x2d4c44\x2d9c9e\x2df8a11cf5074f.mount has successfully entered the 'dead' state. Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.071310484Z" level=info msg="runSandbox: deleting pod ID 5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581 from idIndex" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.071333493Z" level=info msg="runSandbox: removing pod sandbox 5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.071346250Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.071365171Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.087436590Z" level=info msg="runSandbox: removing pod sandbox from storage: 5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.090004747Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.090022375Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=85e0a50c-ce11-4965-8067-8e61cc7df8ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:11.090255 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:11.090290 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:11.090313 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:11.090358 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:05:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e5bcc5bd\x2d0343\x2d4c44\x2d9c9e\x2df8a11cf5074f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e5bcc5bd\x2d0343\x2d4c44\x2d9c9e\x2df8a11cf5074f.mount has successfully entered the 'dead' state. Jan 23 17:05:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5769f1d3208948ea7635ea8368b2ba72c8088d1545c87bb30fd548b05b367581-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:05:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:11.996011 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.996370321Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:11.996408768Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.007254023Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/bfe4d983-4180-4e53-b33c-d7a51e7050be Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.007273053Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.031875320Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:05:12.031903045Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-73aca002\x2d2a6d\x2d4bfa\x2d8c84\x2d0ea5030b948c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-73aca002\x2d2a6d\x2d4bfa\x2d8c84\x2d0ea5030b948c.mount has successfully entered the 'dead' state. Jan 23 17:05:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-73aca002\x2d2a6d\x2d4bfa\x2d8c84\x2d0ea5030b948c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-73aca002\x2d2a6d\x2d4bfa\x2d8c84\x2d0ea5030b948c.mount has successfully entered the 'dead' state. Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.067304993Z" level=info msg="runSandbox: deleting pod ID 48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de from idIndex" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.067331564Z" level=info msg="runSandbox: removing pod sandbox 48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.067349745Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.067366059Z" level=info msg="runSandbox: unmounting shmPath for sandbox 48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.087429981Z" level=info msg="runSandbox: removing pod sandbox from storage: 48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.090152512Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:12.090171343Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=24346688-7e0d-43be-8539-8757749fcdb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:12.090407 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:12.090450 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:05:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:12.090476 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:05:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:12.090523 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:05:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-73aca002\x2d2a6d\x2d4bfa\x2d8c84\x2d0ea5030b948c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-73aca002\x2d2a6d\x2d4bfa\x2d8c84\x2d0ea5030b948c.mount has successfully entered the 'dead' state. Jan 23 17:05:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-48ca01045ab6113e927f16f081e90f67b69bac7c794d65cc0892de365528f1de-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.035157082Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.035194895Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.035552689Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.035590262Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-48329b72\x2d7494\x2d43bd\x2d950c\x2dc0505ae6cca4.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-48329b72\x2d7494\x2d43bd\x2d950c\x2dc0505ae6cca4.mount has successfully entered the 'dead' state. Jan 23 17:05:14 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ed78c465\x2d2d44\x2d43b6\x2db3e2\x2db04718c04f2d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ed78c465\x2d2d44\x2d43b6\x2db3e2\x2db04718c04f2d.mount has successfully entered the 'dead' state. Jan 23 17:05:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-48329b72\x2d7494\x2d43bd\x2d950c\x2dc0505ae6cca4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-48329b72\x2d7494\x2d43bd\x2d950c\x2dc0505ae6cca4.mount has successfully entered the 'dead' state. Jan 23 17:05:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ed78c465\x2d2d44\x2d43b6\x2db3e2\x2db04718c04f2d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ed78c465\x2d2d44\x2d43b6\x2db3e2\x2db04718c04f2d.mount has successfully entered the 'dead' state. Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.082292043Z" level=info msg="runSandbox: deleting pod ID 6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e from idIndex" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.082323332Z" level=info msg="runSandbox: removing pod sandbox 6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.082339037Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.082352671Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.089311472Z" level=info msg="runSandbox: deleting pod ID 5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782 from idIndex" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.089338097Z" level=info msg="runSandbox: removing pod sandbox 5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.089353381Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.089368577Z" 
level=info msg="runSandbox: unmounting shmPath for sandbox 5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.097438066Z" level=info msg="runSandbox: removing pod sandbox from storage: 6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.100719326Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.100738362Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=aec7b435-646f-481e-9536-a7cbb7749008 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:14.100983 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:14.101060 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:05:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:14.101092 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:05:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:14.101148 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.105443402Z" level=info msg="runSandbox: removing pod sandbox from storage: 5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.108693845Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:14.108712239Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=dc0934e5-b41a-4f37-a2aa-07e3876a9c2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:14.108918 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:14.108961 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:05:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:14.108991 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:05:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:14.109042 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:05:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-48329b72\x2d7494\x2d43bd\x2d950c\x2dc0505ae6cca4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-48329b72\x2d7494\x2d43bd\x2d950c\x2dc0505ae6cca4.mount has successfully entered the 'dead' state. Jan 23 17:05:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ed78c465\x2d2d44\x2d43b6\x2db3e2\x2db04718c04f2d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ed78c465\x2d2d44\x2d43b6\x2db3e2\x2db04718c04f2d.mount has successfully entered the 'dead' state. Jan 23 17:05:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6c9d95b5dfc7d99247f13317c5ddfd0c8d869fa31f7f0a3394b9d564de72d98e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:05:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5a9eb8287c930fa08d1c01d6a309742cc6c6217ac28a3cfaeb6deee0df25e782-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:05:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:16.996665 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:05:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:16.997215470Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:16.997269020Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.009093810Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/de69b07e-dc21-4282-bfbb-ffbe53c58d4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.009115405Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.031567153Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.031604708Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab systemd[1]: 
run-utsns-b8ef9bdc\x2d525b\x2d43ea\x2db676\x2d3acbabd92f44.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b8ef9bdc\x2d525b\x2d43ea\x2db676\x2d3acbabd92f44.mount has successfully entered the 'dead' state. Jan 23 17:05:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b8ef9bdc\x2d525b\x2d43ea\x2db676\x2d3acbabd92f44.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b8ef9bdc\x2d525b\x2d43ea\x2db676\x2d3acbabd92f44.mount has successfully entered the 'dead' state. Jan 23 17:05:17 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b8ef9bdc\x2d525b\x2d43ea\x2db676\x2d3acbabd92f44.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b8ef9bdc\x2d525b\x2d43ea\x2db676\x2d3acbabd92f44.mount has successfully entered the 'dead' state. Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.073305077Z" level=info msg="runSandbox: deleting pod ID efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396 from idIndex" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.073332470Z" level=info msg="runSandbox: removing pod sandbox efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.073347227Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.073359690Z" level=info msg="runSandbox: unmounting shmPath for sandbox efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.089443198Z" level=info msg="runSandbox: removing pod sandbox from storage: efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.092085109Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.092103397Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=3beb6dce-d39a-4b72-bfc3-f217f974e973 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:17.092290 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:17.092341 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:05:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:17.092378 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:05:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:17.092437 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:05:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:17.996632 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.996934968Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:17.996970041Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-efb7e96bc246bca7c263ed0af025e3e31c13a281bd7182783d6daacaa1fde396-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:05:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:18.011830997Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/2922ae2f-e6d7-47f1-9aa7-1c4f8a6a8ffe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:18.011857830Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:19.996687 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:05:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:19.997191 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:05:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:20.996226 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:05:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:20.996533158Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:20.996575201Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:21.007528867Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/ce829084-c2f3-408b-8845-0ef5e5bd16a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:21.007548618Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:21.996478 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:05:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:21.996790730Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:21.996828425Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:22.007107085Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/7df85ab8-0a56-4fed-915e-aa88d3de9f0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:22.007126134Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:24.996329 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:05:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:24.996564 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:05:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:24.996663204Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:24.996703095Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:24.996796744Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:24.996828171Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:25.013228384Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/137e25d9-c8aa-4e1a-b81c-5ed5984f171d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:25.013248395Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:25.013897233Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab 
Namespace:openshift-kube-apiserver ID:9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/8fda5b18-ca95-43f5-bf14-b2c5320e79e3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:25.013918311Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:25.995547 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:05:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:25.995873647Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:25.995921046Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.007533728Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/5ae15309-ef9e-4860-ac6f-fde13b72090f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.007757062Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.733468850Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.733503543Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.733518729Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.733554361Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.733505494Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.733637569Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c460a6fa\x2dd7a4\x2d479a\x2d8988\x2d8cd767b173c4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c460a6fa\x2dd7a4\x2d479a\x2d8988\x2d8cd767b173c4.mount has successfully entered the 'dead' state. Jan 23 17:05:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5022b15c\x2d3f74\x2d432e\x2da144\x2d8f15f2b9ff82.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5022b15c\x2d3f74\x2d432e\x2da144\x2d8f15f2b9ff82.mount has successfully entered the 'dead' state. Jan 23 17:05:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e020089a\x2d7cdb\x2d4d8e\x2da744\x2de8700d8cb417.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e020089a\x2d7cdb\x2d4d8e\x2da744\x2de8700d8cb417.mount has successfully entered the 'dead' state. 
Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.738861100Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.738895796Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.739525862Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.739565962Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fc5b7405\x2d0415\x2d49c3\x2d8da1\x2dbdedde06254c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-fc5b7405\x2d0415\x2d49c3\x2d8da1\x2dbdedde06254c.mount has successfully entered the 'dead' state. Jan 23 17:05:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6d5a166c\x2df082\x2d4a78\x2db446\x2d6e10729c7d0f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6d5a166c\x2df082\x2d4a78\x2db446\x2d6e10729c7d0f.mount has successfully entered the 'dead' state. Jan 23 17:05:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c460a6fa\x2dd7a4\x2d479a\x2d8988\x2d8cd767b173c4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c460a6fa\x2dd7a4\x2d479a\x2d8988\x2d8cd767b173c4.mount has successfully entered the 'dead' state. 
Jan 23 17:05:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5022b15c\x2d3f74\x2d432e\x2da144\x2d8f15f2b9ff82.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5022b15c\x2d3f74\x2d432e\x2da144\x2d8f15f2b9ff82.mount has successfully entered the 'dead' state. Jan 23 17:05:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e020089a\x2d7cdb\x2d4d8e\x2da744\x2de8700d8cb417.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e020089a\x2d7cdb\x2d4d8e\x2da744\x2de8700d8cb417.mount has successfully entered the 'dead' state. Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.775340481Z" level=info msg="runSandbox: deleting pod ID 8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b from idIndex" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.775374035Z" level=info msg="runSandbox: removing pod sandbox 8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.775391135Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.775412731Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.775340315Z" level=info msg="runSandbox: deleting pod ID b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907 from idIndex" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.775451767Z" level=info msg="runSandbox: removing pod sandbox b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.775466368Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.775481549Z" level=info msg="runSandbox: unmounting shmPath for sandbox b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.776287132Z" level=info msg="runSandbox: deleting pod ID 4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de from idIndex" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:05:26.776312444Z" level=info msg="runSandbox: removing pod sandbox 4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.776324549Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.776335783Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.783272141Z" level=info msg="runSandbox: deleting pod ID fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a from idIndex" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.783296567Z" level=info msg="runSandbox: removing pod sandbox fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.783308986Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.783321347Z" level=info msg="runSandbox: unmounting shmPath for sandbox fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.783324486Z" level=info msg="runSandbox: deleting pod ID 8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e from idIndex" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.783353333Z" level=info msg="runSandbox: removing pod sandbox 8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.783370806Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.783382786Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.799533043Z" level=info msg="runSandbox: removing pod sandbox from storage: 
4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.799618142Z" level=info msg="runSandbox: removing pod sandbox from storage: b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.799620495Z" level=info msg="runSandbox: removing pod sandbox from storage: 8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.802086195Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.802104579Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=b5929821-cb55-4fcd-a1be-57be1a209abe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.802334 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.802379 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.802401 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.802449 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.805150580Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.805170347Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=9ba30a3a-5479-4d28-8f97-853e50a18957 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.805323 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.805357 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.805376 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.805414 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.807494558Z" level=info msg="runSandbox: removing pod sandbox from storage: fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.807532996Z" level=info msg="runSandbox: removing pod sandbox from storage: 8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.808069758Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.808088327Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=93a95e46-c906-42e4-80a7-cfb3673fb9a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.808328 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.808359 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.808381 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.808418 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.814803647Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.814825692Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=02f5a094-f6c1-45fb-9a94-c967ce918813 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.815061 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.815111 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.815140 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.815194 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.817800729Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.817819468Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=24ce8c37-aad6-4337-87fb-a081e30d52ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.818029 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.818065 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.818089 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:26.818129 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:26.841515 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:26.841742 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.841823257Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.841851805Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:26.841836 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:26.841900 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.841971188Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.841997706Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:26.842042 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.842096072Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.842135078Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.842142135Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.842168928Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.842250600Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.842268953Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.866277951Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/a3461274-2bc9-4854-ba93-01f53833176b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.866302631Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.870925723Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/2e579da6-5ec6-4510-ae84-a2bdc384eef1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:05:26.870945270Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.871762877Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9668a10a-2bd4-46b8-aa1e-5a35aeaed543 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.871781477Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.872576793Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/c1478442-6080-492f-b4d4-2d7d0095479e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.872598939Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.873328754Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/935f4862-8062-4ed3-b2be-e40ed2a915ea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.873350087Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:26.995716 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.996126321Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:26.996167378Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:27.006806432Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/1b4067d0-9326-4c77-9911-b52411d0dfd4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:27.006825495Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fc5b7405\x2d0415\x2d49c3\x2d8da1\x2dbdedde06254c.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fc5b7405\x2d0415\x2d49c3\x2d8da1\x2dbdedde06254c.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6d5a166c\x2df082\x2d4a78\x2db446\x2d6e10729c7d0f.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6d5a166c\x2df082\x2d4a78\x2db446\x2d6e10729c7d0f.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c460a6fa\x2dd7a4\x2d479a\x2d8988\x2d8cd767b173c4.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5022b15c\x2d3f74\x2d432e\x2da144\x2d8f15f2b9ff82.mount: Succeeded. 
Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e020089a\x2d7cdb\x2d4d8e\x2da744\x2de8700d8cb417.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fab31364f5f6dd3c56bca76581adaba21ce699f5a0bd61f4276abdd733ffbb2a-userdata-shm.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8ef473d7a66f356750697c2f82c31b57b24f00224d3bc1f324f62189e674b91e-userdata-shm.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b33a8740f4e2654fbb4c9ef53197197209de9cae0c1d272d7ce7a9eab70c9907-userdata-shm.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4ae26cf970d7f1e1c046fe7650bd4d98655f0a84e7fdda63c5397e22df70e1de-userdata-shm.mount: Succeeded. Jan 23 17:05:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8af8b754c927b09387ed95dd81e95befc1940af0d10a7d7229404ddecf464b6b-userdata-shm.mount: Succeeded. 
Jan 23 17:05:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:27.883398 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:05:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:27.883419 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:05:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:27.883426 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:05:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:27.883433 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:05:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:27.883438 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:05:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:27.883446 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:05:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:27.883452 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:05:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:27.891160170Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=12b2e591-a4cf-4661-ab81-59efa599bc0e name=/runtime.v1.ImageService/ImageStatus Jan 23 17:05:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:27.891277954Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=12b2e591-a4cf-4661-ab81-59efa599bc0e name=/runtime.v1.ImageService/ImageStatus Jan 23 17:05:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:28.143382574Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:05:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:31.996493 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:05:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:31.997007992Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:31.997052515Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:05:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:32.007907989Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/b22f2b08-18f3-4f31-bcce-dcaac3c3db5f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:32.007928057Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:34.996283 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:05:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:34.996774 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1260] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1265] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1267] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1279] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1280] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1293] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1296] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1296] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1298] device (eno12409): state change: prepare -> config (reason 'none', 
sys-iface-state: 'managed') Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1302] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:05:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493538.1306] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:05:40 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493540.0591] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:05:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:05:47.997583 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:05:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:05:47.998052 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:05:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:50.021231923Z" level=info msg="NetworkStart: stopping network for sandbox d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:50.021633328Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/e3e6874d-afc1-4ff3-a1e0-5fec8a4b6830 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:50.021657901Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:05:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:50.021664282Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:05:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:50.021671513Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:56.021901716Z" level=info msg="NetworkStart: stopping network for sandbox 3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:56.022043078Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/7e284892-ec07-449c-9d9b-bba0bab38d2c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:56.022065250Z" level=error 
msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:05:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:56.022072193Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:05:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:56.022078443Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:57.019288889Z" level=info msg="NetworkStart: stopping network for sandbox 8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:05:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:57.019433863Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/bfe4d983-4180-4e53-b33c-d7a51e7050be Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:05:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:57.019458238Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:05:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:57.019465402Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:05:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:57.019472440Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:05:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:05:58.143599447Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:06:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:02.022226426Z" level=info msg="NetworkStart: stopping network for sandbox f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:02.022574253Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/de69b07e-dc21-4282-bfbb-ffbe53c58d4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:02.022596650Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:06:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:02.022603481Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:06:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:02.022609473Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" 
(type=multus)" Jan 23 17:06:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:02.996101 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" Jan 23 17:06:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:02.996716 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:06:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:03.026199688Z" level=info msg="NetworkStart: stopping network for sandbox ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:03.026350184Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/2922ae2f-e6d7-47f1-9aa7-1c4f8a6a8ffe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:03.026373810Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:06:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:03.026381024Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:06:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:03.026387098Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.020664632Z" level=info msg="NetworkStart: stopping network for sandbox fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.020802258Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/ce829084-c2f3-408b-8845-0ef5e5bd16a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.020822718Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.020829192Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.020836039Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:06:06 hub-master-0.workload.bos2.lab 
conmon[87818]: conmon ac84125bfc286157076c : container 87830 exited with status 1 Jan 23 17:06:06 hub-master-0.workload.bos2.lab systemd[1]: crio-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope has successfully entered the 'dead' state. Jan 23 17:06:06 hub-master-0.workload.bos2.lab systemd[1]: crio-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope: Consumed 3.733s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope completed and consumed the indicated resources. Jan 23 17:06:06 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope has successfully entered the 'dead' state. Jan 23 17:06:06 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope: Consumed 53ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5.scope completed and consumed the indicated resources. Jan 23 17:06:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:06.917505 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5" exitCode=1 Jan 23 17:06:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:06.917621 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5} Jan 23 17:06:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:06.917654 8631 scope.go:115] "RemoveContainer" containerID="7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac" Jan 23 17:06:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:06.917943 8631 scope.go:115] "RemoveContainer" containerID="ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5" Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.918377271Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=05a7672f-9df3-47a3-a949-b2cdbc7ce9b8 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.918503289Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=05a7672f-9df3-47a3-a949-b2cdbc7ce9b8 
name=/runtime.v1.ImageService/ImageStatus Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.918735088Z" level=info msg="Removing container: 7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac" id=9b84f864-347a-4a51-888b-1bf7ace1aab1 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.919003039Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=ea44afa7-b54d-4849-a258-1681c89e1e96 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.919149874Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ea44afa7-b54d-4849-a258-1681c89e1e96 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.919780323Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=22288128-4aa6-4942-9ac8-349263003afd name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.919857370Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:06:06 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-3d0cf52538a5fd42725bb0692cdf7dfa06c0afd74a775e9fa2a43a8d40df8624-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-3d0cf52538a5fd42725bb0692cdf7dfa06c0afd74a775e9fa2a43a8d40df8624-merged.mount has successfully entered the 'dead' state. Jan 23 17:06:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:06.963719195Z" level=info msg="Removed container 7a1568c8ffde10fbf461b08ffd663fbda7355012b704dfb0c9e59d7ac7c357ac: openshift-multus/multus-cdt6c/kube-multus" id=9b84f864-347a-4a51-888b-1bf7ace1aab1 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:06:06 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope. -- Subject: Unit crio-conmon-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:06:07 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314. -- Subject: Unit crio-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.019604857Z" level=info msg="NetworkStart: stopping network for sandbox 932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.019848068Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/7df85ab8-0a56-4fed-915e-aa88d3de9f0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.019873317Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.019881532Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.019888604Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.066803526Z" level=info msg="Created container 4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314: openshift-multus/multus-cdt6c/kube-multus" id=22288128-4aa6-4942-9ac8-349263003afd name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.067250527Z" level=info msg="Starting container: 4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314" id=da2e671c-314b-4690-9ca1-832e65bfaedd name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.086821954Z" level=info msg="Started container" PID=105795 containerID=4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314 description=openshift-multus/multus-cdt6c/kube-multus id=da2e671c-314b-4690-9ca1-832e65bfaedd name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.091020932Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_71d0fe81-8c29-4882-b01a-51457924b156\""
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.101526866Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.101547887Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.112645750Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\""
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.122713202Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.122730312Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:06:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:07.122739706Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_71d0fe81-8c29-4882-b01a-51457924b156\""
Jan 23 17:06:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:07.921123 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314}
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027522500Z" level=info msg="NetworkStart: stopping network for sandbox d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027583536Z" level=info msg="NetworkStart: stopping network for sandbox 9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027729057Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/137e25d9-c8aa-4e1a-b81c-5ed5984f171d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027741137Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/8fda5b18-ca95-43f5-bf14-b2c5320e79e3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027752869Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027761182Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027765372Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027773808Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027780386Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:10.027767967Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.020076538Z" level=info msg="NetworkStart: stopping network for sandbox ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.020234433Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/5ae15309-ef9e-4860-ac6f-fde13b72090f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.020257160Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.020266178Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.020273702Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.879726252Z" level=info msg="NetworkStart: stopping network for sandbox 7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.879863503Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/a3461274-2bc9-4854-ba93-01f53833176b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.879887431Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.879894186Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.879900659Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.884714115Z" level=info msg="NetworkStart: stopping network for sandbox c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.884855065Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/2e579da6-5ec6-4510-ae84-a2bdc384eef1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.884877946Z" level=info msg="NetworkStart: stopping network for sandbox 14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.884881616Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.884995150Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.884998343Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9668a10a-2bd4-46b8-aa1e-5a35aeaed543 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885026820Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885034307Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885040130Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885004995Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885174097Z" level=info msg="NetworkStart: stopping network for sandbox 1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885309985Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/c1478442-6080-492f-b4d4-2d7d0095479e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885336803Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885343308Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.885349848Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.886667335Z" level=info msg="NetworkStart: stopping network for sandbox a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.886768911Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/935f4862-8062-4ed3-b2be-e40ed2a915ea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.886790575Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.886797139Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:11.886803343Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:12.019580758Z" level=info msg="NetworkStart: stopping network for sandbox 0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:12.019696951Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/1b4067d0-9326-4c77-9911-b52411d0dfd4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:12.019717238Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:12.019723784Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:12.019730207Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:13.996560 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:06:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:13.997264 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:06:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:17.022930989Z" level=info msg="NetworkStart: stopping network for sandbox c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:17.023076464Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/b22f2b08-18f3-4f31-bcce-dcaac3c3db5f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:17.023102388Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:06:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:17.023110116Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:06:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:17.023117113Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:27.884386 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:06:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:27.884406 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:06:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:27.884413 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:06:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:27.884418 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:06:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:27.884424 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:06:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:27.884430 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:06:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:27.884438 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:06:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:28.143119992Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:06:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:28.996774 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:06:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:28.997288 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.033196094Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.033254298Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e3e6874d\x2dafc1\x2d4ff3\x2da1e0\x2d5fec8a4b6830.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e3e6874d\x2dafc1\x2d4ff3\x2da1e0\x2d5fec8a4b6830.mount has successfully entered the 'dead' state.
Jan 23 17:06:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e3e6874d\x2dafc1\x2d4ff3\x2da1e0\x2d5fec8a4b6830.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e3e6874d\x2dafc1\x2d4ff3\x2da1e0\x2d5fec8a4b6830.mount has successfully entered the 'dead' state.
Jan 23 17:06:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e3e6874d\x2dafc1\x2d4ff3\x2da1e0\x2d5fec8a4b6830.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-e3e6874d\x2dafc1\x2d4ff3\x2da1e0\x2d5fec8a4b6830.mount has successfully entered the 'dead' state.
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.079368733Z" level=info msg="runSandbox: deleting pod ID d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225 from idIndex" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.079401429Z" level=info msg="runSandbox: removing pod sandbox d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.079416445Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.079428361Z" level=info msg="runSandbox: unmounting shmPath for sandbox d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.099462333Z" level=info msg="runSandbox: removing pod sandbox from storage: d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.102927300Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:35.102945748Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=b2413078-ce7f-40ce-a44f-283b4e7a0750 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:35.103175 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:35.103230 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:06:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:35.103254 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:06:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:35.103306 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(d04cd9edf82abd7ef4880d8ff535b9099da39545fd3ac6f3986945dd6b6ad225): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:06:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:40.996251 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:06:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:40.996806 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.032909696Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.032945575Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7e284892\x2dec07\x2d449c\x2d9d9b\x2dbba0bab38d2c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7e284892\x2dec07\x2d449c\x2d9d9b\x2dbba0bab38d2c.mount has successfully entered the 'dead' state.
Jan 23 17:06:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7e284892\x2dec07\x2d449c\x2d9d9b\x2dbba0bab38d2c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7e284892\x2dec07\x2d449c\x2d9d9b\x2dbba0bab38d2c.mount has successfully entered the 'dead' state.
Jan 23 17:06:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7e284892\x2dec07\x2d449c\x2d9d9b\x2dbba0bab38d2c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7e284892\x2dec07\x2d449c\x2d9d9b\x2dbba0bab38d2c.mount has successfully entered the 'dead' state.
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.069432609Z" level=info msg="runSandbox: deleting pod ID 3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6 from idIndex" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.069458895Z" level=info msg="runSandbox: removing pod sandbox 3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.069472061Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.069483249Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.081454010Z" level=info msg="runSandbox: removing pod sandbox from storage: 3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.084802257Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:41.084820529Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=8ef3c07c-ddc0-4d1c-ab64-b35f89014448 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:41.085014 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:41.085052 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:06:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:41.085074 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:06:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:41.085115 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(3535637ba3b62d73c585b8c7ad8ac0716c7f161fb5e5db897d647c94d1d7eac6): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.030156366Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.030198618Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bfe4d983\x2d4180\x2d4e53\x2db33c\x2dd7a51e7050be.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-bfe4d983\x2d4180\x2d4e53\x2db33c\x2dd7a51e7050be.mount has successfully entered the 'dead' state.
Jan 23 17:06:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bfe4d983\x2d4180\x2d4e53\x2db33c\x2dd7a51e7050be.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-bfe4d983\x2d4180\x2d4e53\x2db33c\x2dd7a51e7050be.mount has successfully entered the 'dead' state.
Jan 23 17:06:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bfe4d983\x2d4180\x2d4e53\x2db33c\x2dd7a51e7050be.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-bfe4d983\x2d4180\x2d4e53\x2db33c\x2dd7a51e7050be.mount has successfully entered the 'dead' state.
Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.075412987Z" level=info msg="runSandbox: deleting pod ID 8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777 from idIndex" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.075443631Z" level=info msg="runSandbox: removing pod sandbox 8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.075461519Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.075475467Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.084469640Z" level=info msg="runSandbox: removing pod sandbox from storage: 8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.087860446Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:42.087879970Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=c1e8d2a4-fa3c-4529-8035-60b4ecb81ce1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:42.088081 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:06:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:42.088130 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:06:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:42.088154 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:06:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:42.088215 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8203652ea21523fd54886abe3dd21d4ac5dd1d05b5da31b3116b5c10dd22d777): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.034070036Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.034110532Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-de69b07e\x2ddc21\x2d4282\x2dbfbb\x2dffbe53c58d4b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-de69b07e\x2ddc21\x2d4282\x2dbfbb\x2dffbe53c58d4b.mount has successfully entered the 'dead' state. Jan 23 17:06:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-de69b07e\x2ddc21\x2d4282\x2dbfbb\x2dffbe53c58d4b.mount: Succeeded. 
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-de69b07e\x2ddc21\x2d4282\x2dbfbb\x2dffbe53c58d4b.mount has successfully entered the 'dead' state.
Jan 23 17:06:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-de69b07e\x2ddc21\x2d4282\x2dbfbb\x2dffbe53c58d4b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-de69b07e\x2ddc21\x2d4282\x2dbfbb\x2dffbe53c58d4b.mount has successfully entered the 'dead' state.
Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.074315021Z" level=info msg="runSandbox: deleting pod ID f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a from idIndex" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.074340136Z" level=info msg="runSandbox: removing pod sandbox f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.074355104Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.074369457Z" level=info msg="runSandbox: unmounting shmPath for sandbox f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.087437678Z" level=info msg="runSandbox: removing pod sandbox from storage: f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.090979778Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:47.090997030Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=db95f89e-857d-40a5-a919-2d1a558b6b9f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:47.091184 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:47.091239 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:06:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:47.091262 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:06:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:47.091314 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f43cf69393e437c982ffc22b83b90cb8ae3b75609a953da6ee1ae554c5ab874a): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.036980661Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.037016412Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2922ae2f\x2de6d7\x2d47f1\x2d9aa7\x2d1c4f8a6a8ffe.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2922ae2f\x2de6d7\x2d47f1\x2d9aa7\x2d1c4f8a6a8ffe.mount has successfully entered the 'dead' state.
Jan 23 17:06:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2922ae2f\x2de6d7\x2d47f1\x2d9aa7\x2d1c4f8a6a8ffe.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2922ae2f\x2de6d7\x2d47f1\x2d9aa7\x2d1c4f8a6a8ffe.mount has successfully entered the 'dead' state.
Jan 23 17:06:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2922ae2f\x2de6d7\x2d47f1\x2d9aa7\x2d1c4f8a6a8ffe.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2922ae2f\x2de6d7\x2d47f1\x2d9aa7\x2d1c4f8a6a8ffe.mount has successfully entered the 'dead' state.
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.088282271Z" level=info msg="runSandbox: deleting pod ID ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5 from idIndex" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.088305661Z" level=info msg="runSandbox: removing pod sandbox ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.088318794Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.088332064Z" level=info msg="runSandbox: unmounting shmPath for sandbox ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.104449274Z" level=info msg="runSandbox: removing pod sandbox from storage: ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.108239106Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.108257065Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=d4d97f95-0793-4017-b5c9-077c5b29f8fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:48.108460 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:48.108504 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:06:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:48.108530 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:06:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:48.108580 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ff60bb41dae2f42a74d28aa857e08b2bbaef73f6d62d0f51475bc767dce7f3b5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:06:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:48.996211 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.996527169Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:48.996567772Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:06:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:49.009061511Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/210bc1bd-6cee-4717-b0cd-87f1fcd0152f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:49.009080621Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.031237228Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.031273037Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ce829084\x2dc2f3\x2d408b\x2d8845\x2d0ef5e5bd16a4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ce829084\x2dc2f3\x2d408b\x2d8845\x2d0ef5e5bd16a4.mount has successfully entered the 'dead' state.
Jan 23 17:06:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ce829084\x2dc2f3\x2d408b\x2d8845\x2d0ef5e5bd16a4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ce829084\x2dc2f3\x2d408b\x2d8845\x2d0ef5e5bd16a4.mount has successfully entered the 'dead' state.
Jan 23 17:06:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ce829084\x2dc2f3\x2d408b\x2d8845\x2d0ef5e5bd16a4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ce829084\x2dc2f3\x2d408b\x2d8845\x2d0ef5e5bd16a4.mount has successfully entered the 'dead' state.
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.090308901Z" level=info msg="runSandbox: deleting pod ID fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa from idIndex" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.090333674Z" level=info msg="runSandbox: removing pod sandbox fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.090346546Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.090357742Z" level=info msg="runSandbox: unmounting shmPath for sandbox fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.106459848Z" level=info msg="runSandbox: removing pod sandbox from storage: fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.113131546Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:51.113156581Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=f648e21a-5df2-4ce9-b835-2f463340911d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:51.113372 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:51.113418 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:06:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:51.113439 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:06:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:51.113487 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(fb5abfd32d6ceb6b4bc2ade6cb5572c6b1ada856de2525194d02ed9fa3fb5baa): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.032991317Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.033043898Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7df85ab8\x2d0a56\x2d4fed\x2d915e\x2daa88d3de9f0f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7df85ab8\x2d0a56\x2d4fed\x2d915e\x2daa88d3de9f0f.mount has successfully entered the 'dead' state.
Jan 23 17:06:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7df85ab8\x2d0a56\x2d4fed\x2d915e\x2daa88d3de9f0f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7df85ab8\x2d0a56\x2d4fed\x2d915e\x2daa88d3de9f0f.mount has successfully entered the 'dead' state.
Jan 23 17:06:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7df85ab8\x2d0a56\x2d4fed\x2d915e\x2daa88d3de9f0f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7df85ab8\x2d0a56\x2d4fed\x2d915e\x2daa88d3de9f0f.mount has successfully entered the 'dead' state.
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.089317902Z" level=info msg="runSandbox: deleting pod ID 932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83 from idIndex" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.089348280Z" level=info msg="runSandbox: removing pod sandbox 932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.089364062Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.089378233Z" level=info msg="runSandbox: unmounting shmPath for sandbox 932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.106457548Z" level=info msg="runSandbox: removing pod sandbox from storage: 932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.109856240Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:52.109874353Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=2446f56a-2daa-49af-813f-bcaf453ff2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:52.110074 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:52.110122 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:06:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:52.110147 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:06:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:52.110204 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(932ae0bcb028999b1a25c72cfd6622d23e3148cfbc3cad18c31d5b08cb2e4d83): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:06:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:54.996846 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:06:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:54.997370 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.040904027Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.040949850Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.040903014Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.041036928Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8fda5b18\x2dca95\x2d43f5\x2dbf14\x2db2c5320e79e3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-8fda5b18\x2dca95\x2d43f5\x2dbf14\x2db2c5320e79e3.mount has successfully entered the 'dead' state.
Jan 23 17:06:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-137e25d9\x2dc8aa\x2d4e1a\x2db81c\x2d5ed5984f171d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-137e25d9\x2dc8aa\x2d4e1a\x2db81c\x2d5ed5984f171d.mount has successfully entered the 'dead' state.
Jan 23 17:06:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8fda5b18\x2dca95\x2d43f5\x2dbf14\x2db2c5320e79e3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-8fda5b18\x2dca95\x2d43f5\x2dbf14\x2db2c5320e79e3.mount has successfully entered the 'dead' state.
Jan 23 17:06:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-137e25d9\x2dc8aa\x2d4e1a\x2db81c\x2d5ed5984f171d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-137e25d9\x2dc8aa\x2d4e1a\x2db81c\x2d5ed5984f171d.mount has successfully entered the 'dead' state.
Jan 23 17:06:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8fda5b18\x2dca95\x2d43f5\x2dbf14\x2db2c5320e79e3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-8fda5b18\x2dca95\x2d43f5\x2dbf14\x2db2c5320e79e3.mount has successfully entered the 'dead' state.
Jan 23 17:06:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-137e25d9\x2dc8aa\x2d4e1a\x2db81c\x2d5ed5984f171d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-137e25d9\x2dc8aa\x2d4e1a\x2db81c\x2d5ed5984f171d.mount has successfully entered the 'dead' state.
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.095311372Z" level=info msg="runSandbox: deleting pod ID d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0 from idIndex" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.095340617Z" level=info msg="runSandbox: removing pod sandbox d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.095353443Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.095365932Z" level=info msg="runSandbox: unmounting shmPath for sandbox d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.095380274Z" level=info msg="runSandbox: deleting pod ID 9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9 from idIndex" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.095419227Z" level=info msg="runSandbox: removing pod sandbox 9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.095439538Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.095454882Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:06:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.105453528Z" level=info msg="runSandbox: removing pod sandbox from storage: 9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.108698272Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.108716704Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=49c9763f-6528-49d6-93da-4f3cbaf32461 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:55.108956 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:55.108995 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:55.109019 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:55.109060 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d1bdfe6bf6109a35932ecb450c28beaca722818509b7ad20ad79b020b451fd9): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.113413888Z" level=info msg="runSandbox: removing pod sandbox from storage: d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.116881032Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.116900175Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=9793f2f4-5f6d-4a49-8257-dcfa3b59ac13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:55.117095 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:55.117133 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:55.117155 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:55.117195 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d965e53378ade704e5c743171ae54171a9211a53a08c9d680564957953e341f0): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:06:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:55.996491 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.996886148Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:55.996927215Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.008792651Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/8a7ad037-b423-428c-b685-7aa9001ecbd5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.008812407Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.032001473Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.032053924Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5ae15309\x2def9e\x2d4860\x2dac6f\x2dfde13b72090f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5ae15309\x2def9e\x2d4860\x2dac6f\x2dfde13b72090f.mount has successfully entered the 'dead' state.
Jan 23 17:06:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5ae15309\x2def9e\x2d4860\x2dac6f\x2dfde13b72090f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5ae15309\x2def9e\x2d4860\x2dac6f\x2dfde13b72090f.mount has successfully entered the 'dead' state.
Jan 23 17:06:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5ae15309\x2def9e\x2d4860\x2dac6f\x2dfde13b72090f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5ae15309\x2def9e\x2d4860\x2dac6f\x2dfde13b72090f.mount has successfully entered the 'dead' state.
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.086304748Z" level=info msg="runSandbox: deleting pod ID ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1 from idIndex" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.086328896Z" level=info msg="runSandbox: removing pod sandbox ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.086342760Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.086355443Z" level=info msg="runSandbox: unmounting shmPath for sandbox ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1-userdata-shm.mount: Succeeded.
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.102458672Z" level=info msg="runSandbox: removing pod sandbox from storage: ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.105418938Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.105436953Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=004c8640-d229-42e0-924f-5182567c391c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.105634 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready?
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.105679 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.105703 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.105760 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ef0548ab51960e944488148180b083bfcfb197278d004123adab25e37c088cb1): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.890795338Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.890849311Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a3461274\x2d2bc9\x2d4854\x2dba93\x2d01f53833176b.mount: Succeeded.
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.895356489Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.895387759Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.895878530Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile
(on del): timed out waiting for the condition" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.895916414Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.896500381Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.896530609Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.896836535Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.896866112Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.947380540Z" level=info msg="runSandbox: deleting pod ID 7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba from idIndex" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.947416453Z" level=info msg="runSandbox: removing pod sandbox 7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:06:56.947433782Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.947452491Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955332581Z" level=info msg="runSandbox: deleting pod ID 1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2 from idIndex" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955585611Z" level=info msg="runSandbox: removing pod sandbox 1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955603350Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955615904Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955332281Z" level=info msg="runSandbox: deleting pod ID a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9 from idIndex" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955706301Z" level=info msg="runSandbox: removing pod sandbox a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955721485Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955734948Z" level=info msg="runSandbox: unmounting shmPath for sandbox a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955332966Z" level=info msg="runSandbox: deleting pod ID 14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3 from idIndex" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955825193Z" level=info msg="runSandbox: removing pod sandbox 14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3" 
id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955839103Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.955855942Z" level=info msg="runSandbox: unmounting shmPath for sandbox 14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.956305931Z" level=info msg="runSandbox: deleting pod ID c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d from idIndex" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.956341042Z" level=info msg="runSandbox: removing pod sandbox c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.956355819Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.956371471Z" level=info msg="runSandbox: unmounting shmPath for sandbox c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.967482005Z" level=info msg="runSandbox: removing pod sandbox from storage: 7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.971168236Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.971187822Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=8317aa01-8fa2-4f8a-a85e-549314fdc261 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.971513 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed 
(add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.971572 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.971596 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.971652 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.971523243Z" level=info msg="runSandbox: removing pod sandbox from storage: c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.971552069Z" level=info msg="runSandbox: removing pod sandbox from storage: 14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.971531029Z" level=info msg="runSandbox: removing pod sandbox from storage: 1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.972427390Z" level=info msg="runSandbox: removing pod sandbox from storage: a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.974817279Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.974836110Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f4490ace-93c8-48a7-82fd-68ef05e28aac name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.975113 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.975165 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.975189 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.975241 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.978105868Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.978125510Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=fd834590-a7dc-47f5-a937-fe20b5d7bf4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.978394 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.978431 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.978453 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.978490 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.981007809Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.981027794Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=8a3707c1-96ae-4dde-9e1b-4749594eeb48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.981282 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.981313 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.981334 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.981370 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.983878303Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.983897834Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=53bb5f68-fc87-433c-9170-b0aac5ab9e96 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.984092 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.984125 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.984146 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:56.984184 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:06:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:56.996107 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.996381667Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:56.996412568Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.006841036Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/7526dbff-4c22-4944-ade5-279afd8295a7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.006861932Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:57.012303 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012477869Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012502606Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:57.012531 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:57.012613 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:57.012705 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:57.012797 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012873261Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012900542Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012877921Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012950555Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012977014Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012905418Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.013047094Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.012952056Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.030038009Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.030070543Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.030811150Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b 
NetNS:/var/run/netns/15c33310-56a6-4348-b0ef-3aa62dc1a538 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.030830929Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.032022739Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9bab1839-7588-407f-95b5-4bd81d023bfc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.032043584Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.048962745Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/0c1a18c0-c67b-4d6b-ad4a-890ae44854af Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.048987931Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.049520949Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f5776def-c0d7-4c0f-9e0e-a24f79ed357b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.049542283Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.051106340Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/de09f7a4-46a6-4e7b-b9af-596b9e496f0e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.051126808Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1b4067d0\x2d9326\x2d4c77\x2d9911\x2db52411d0dfd4.mount: Succeeded. 
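The "Got pod network &{Name:... Namespace:... ID:... UID:... NetNS:... Networks:[] RuntimeConfig:map[...] Aliases:map[]}" entries above are Go %+v dumps of the pod-network value CRI-O hands to its CNI layer for each sandbox. As a reading aid, here is a sketch of a struct with the same printed fields; the field names are taken directly from the dumps, while the types and the assumption that the canonical definition lives in CRI-O's ocicni dependency are the editor's, not the source's.

    package main

    import "fmt"

    // PortMapping, RuntimeConfig, NetAttachment and PodNetwork mirror the
    // fields visible in the "Got pod network &{...}" log dumps; they are a
    // reading aid, not the canonical definitions.
    type PortMapping struct {
            HostPort      int32
            ContainerPort int32
            Protocol      string
    }

    type BandwidthConfig struct{ IngressRate, EgressRate int }

    type RuntimeConfig struct {
            IP           string           // requested static IP; empty in every dump above
            MAC          string           // requested MAC; empty above
            PortMappings []PortMapping    // [] above
            Bandwidth    *BandwidthConfig // unset above
            IpRanges     [][]string       // [] above
    }

    type NetAttachment struct{ Name, Ifname string }

    type PodNetwork struct {
            Name          string // pod name, e.g. "apiserver-746c4bf98c-9x4mg"
            Namespace     string // e.g. "openshift-apiserver"
            ID            string // sandbox ID (the 64-hex-char container ID)
            UID           string // pod UID from the kube API
            NetNS         string // /var/run/netns/<uuid> path for the sandbox
            Networks      []NetAttachment
            RuntimeConfig map[string]RuntimeConfig
            Aliases       map[string][]string
    }

    func main() {
            pn := PodNetwork{
                    Name:          "dns-default-srzv5",
                    Namespace:     "openshift-dns",
                    NetNS:         "/var/run/netns/8a7ad037-b423-428c-b685-7aa9001ecbd5",
                    RuntimeConfig: map[string]RuntimeConfig{"multus-cni-network": {}},
            }
            // %+v reproduces the field:value layout seen in the crio lines.
            fmt.Printf("Got pod network &%+v\n", pn)
    }

In every dump above, Networks is empty and only the multus-cni-network runtime entry is present: the pods had no secondary attachments and were blocked purely on the default network's readiness.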
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1b4067d0\x2d9326\x2d4c77\x2d9911\x2db52411d0dfd4.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-935f4862\x2d8062\x2d4ed3\x2db2be\x2de40ed2a915ea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-935f4862\x2d8062\x2d4ed3\x2db2be\x2de40ed2a915ea.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-935f4862\x2d8062\x2d4ed3\x2db2be\x2de40ed2a915ea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-935f4862\x2d8062\x2d4ed3\x2db2be\x2de40ed2a915ea.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-935f4862\x2d8062\x2d4ed3\x2db2be\x2de40ed2a915ea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-935f4862\x2d8062\x2d4ed3\x2db2be\x2de40ed2a915ea.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c1478442\x2d6080\x2d492f\x2db4d4\x2d2d7d0095479e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c1478442\x2d6080\x2d492f\x2db4d4\x2d2d7d0095479e.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c1478442\x2d6080\x2d492f\x2db4d4\x2d2d7d0095479e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c1478442\x2d6080\x2d492f\x2db4d4\x2d2d7d0095479e.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c1478442\x2d6080\x2d492f\x2db4d4\x2d2d7d0095479e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c1478442\x2d6080\x2d492f\x2db4d4\x2d2d7d0095479e.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9668a10a\x2d2bd4\x2d46b8\x2daa1e\x2d5a35aeaed543.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9668a10a\x2d2bd4\x2d46b8\x2daa1e\x2d5a35aeaed543.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9668a10a\x2d2bd4\x2d46b8\x2daa1e\x2d5a35aeaed543.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9668a10a\x2d2bd4\x2d46b8\x2daa1e\x2d5a35aeaed543.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9668a10a\x2d2bd4\x2d46b8\x2daa1e\x2d5a35aeaed543.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9668a10a\x2d2bd4\x2d46b8\x2daa1e\x2d5a35aeaed543.mount has successfully entered the 'dead' state. Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2e579da6\x2d5ec6\x2d4510\x2dae84\x2da2bdc384eef1.mount: Succeeded. 
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2e579da6\x2d5ec6\x2d4510\x2dae84\x2da2bdc384eef1.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2e579da6\x2d5ec6\x2d4510\x2dae84\x2da2bdc384eef1.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a3461274\x2d2bc9\x2d4854\x2dba93\x2d01f53833176b.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a3461274\x2d2bc9\x2d4854\x2dba93\x2d01f53833176b.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a6508e16ed97c6648719b8d117998810713bc62c02077c14596dca8a979e53a9-userdata-shm.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1ce88c6bbc170442aa766fa508a1d37c43519f3d4d64e898e0cdd57d90c0e3b2-userdata-shm.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-14f66b7c1a689425d8e11dab3cdb8ffdc41174f5e6e95b154b336526727a4bf3-userdata-shm.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c347c3d6a102ba2ebae4137f095b31b6620faa9f295d27efd76e3272a91d4e7d-userdata-shm.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7ae63939eca28bac123c57ffe72b944812bd8b527e444a2eee4082aa65a6b6ba-userdata-shm.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1b4067d0\x2d9326\x2d4c77\x2d9911\x2db52411d0dfd4.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1b4067d0\x2d9326\x2d4c77\x2d9911\x2db52411d0dfd4.mount: Succeeded.
Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.086280493Z" level=info msg="runSandbox: deleting pod ID 0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8 from idIndex" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.086307255Z" level=info msg="runSandbox: removing pod sandbox 0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.086321878Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.086334625Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8-userdata-shm.mount: Succeeded.
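The runSandbox entries above trace CRI-O's cleanup path for a failed sandbox: drop the pod ID from the in-memory idIndex, remove the sandbox, delete the container ID, unmount the per-sandbox shm, and (continuing below) remove it from storage and release the names. As a sketch of just the unmount step, here is the equivalent Linux call; the path is decoded from the systemd mount-unit name above (systemd escapes "-" as \x2d and maps "-" to "/"), and MNT_DETACH is an illustrative flag choice, not necessarily CRI-O's.

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Per-sandbox shm mount point corresponding to the unit
	// run-containers-storage-overlay\x2dcontainers-<id>-userdata-shm.mount;
	// the sandbox id is copied from the log entries above.
	shm := "/run/containers/storage/overlay-containers/" +
		"0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8" +
		"/userdata/shm"
	// MNT_DETACH requests a lazy unmount, detaching the mount immediately
	// and cleaning up once it is no longer busy.
	if err := unix.Unmount(shm, unix.MNT_DETACH); err != nil {
		fmt.Println("unmount failed:", err)
	}
}
```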
Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.106419673Z" level=info msg="runSandbox: removing pod sandbox from storage: 0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.109314288Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:57.109332112Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=8fa2ebfc-1af2-4123-8b6e-a5b1914ec0c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:57.109580 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:57.109627 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:57.109650 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:06:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:06:57.109696 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(0107deeb5ea6aa4419c42eef0b1eaaa2e5ab7965ca3a62d570a5c1b22206d1f8): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 17:06:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:58.146751444Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:06:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:06:59.995843 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:06:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:59.996175380Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:06:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:06:59.996230266Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:00.007669213Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/0ba4f186-b28c-4f29-90d7-ea28fc12230f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:00.007689075Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:00.995864 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:07:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:00.996384987Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:00.996422961Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:01.006774174Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/546b3871-f096-47c8-bc25-22ebb77caad4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:01.006797483Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.034388161Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.034430145Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b22f2b08\x2d18f3\x2d4f31\x2dbcce\x2ddcaac3c3db5f.mount: Succeeded.
Jan 23 17:07:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b22f2b08\x2d18f3\x2d4f31\x2dbcce\x2ddcaac3c3db5f.mount: Succeeded.
Jan 23 17:07:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b22f2b08\x2d18f3\x2d4f31\x2dbcce\x2ddcaac3c3db5f.mount: Succeeded.
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.078315277Z" level=info msg="runSandbox: deleting pod ID c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254 from idIndex" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.078341070Z" level=info msg="runSandbox: removing pod sandbox c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.078354852Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.078366208Z" level=info msg="runSandbox: unmounting shmPath for sandbox c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254-userdata-shm.mount: Succeeded.
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.090422121Z" level=info msg="runSandbox: removing pod sandbox from storage: c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.093356654Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.093375321Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=1733099c-2db5-46e7-9aca-b980895f3998 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:02.093555 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:07:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:02.093599 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:07:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:02.093622 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:07:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:02.093672 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c9746e6a986d488e62ee724f2da0dfc1487b97351133e41ac18e36fda941c254): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:07:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:02.996118 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.996506563Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:02.996545298Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:03.007840467Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/9e6e171b-ff0c-4c17-9c77-39b15ae0a5f4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:03.007859103Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:05.995722 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:07:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:05.996035289Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:05.996074598Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:06.009575516Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/6068dc54-38e5-4b2c-b790-97c86d862579 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:06.009788972Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:06.996053 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:07:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:06.996457010Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:06.996505489Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:07.009799827Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/58bb62d3-ae8a-4d02-9487-eb37b2acff67 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:07.009823499Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:07.997013 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:07:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:07.997335 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:07:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:07.997456 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:07:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:07.997528 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:07:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:07.997721342Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:07.997759482Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:07.997960396Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:07.997994299Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:08.018616615Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/5f23eb2e-d5f4-42fc-830d-2fa6961ee29f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:08.018672865Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:08.020178110Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/0851302a-5572-4f26-98b0-6f7aca9ac0c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:08.020202161Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493628.1185] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 17:07:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493628.1190] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 17:07:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493628.1191] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 17:07:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493628.1493] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 17:07:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493628.1494] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 17:07:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:09.996569 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:07:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:09.996995882Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:09.997041261Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:10.008958378Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bc09e69a-bec5-4f14-9b0a-d48a06c90154 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:10.008983430Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:14.995696 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:07:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:14.996110344Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:14.996153490Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:15.008368128Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/3e4addca-995d-44bd-829e-35015fb35fb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:15.008388491Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:22.996910 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:07:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:22.997409 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:07:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:27.884691 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:07:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:27.884711 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:07:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:27.884717 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:07:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:27.884723 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:07:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:27.884729 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:07:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:27.884737 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:07:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:27.884743 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:07:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:28.143069284Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:07:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:34.022574842Z" level=info msg="NetworkStart: stopping network for sandbox bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:34.022757774Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/210bc1bd-6cee-4717-b0cd-87f1fcd0152f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:34.022782473Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:34.022789957Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:34.022797883Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:37.997406 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:07:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:37.998067 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:07:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:41.023090742Z" level=info msg="NetworkStart: stopping network for sandbox 08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:41.023251860Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/8a7ad037-b423-428c-b685-7aa9001ecbd5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:41.023276908Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:41.023283838Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:41.023294110Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.021566690Z" level=info msg="NetworkStart: stopping network for sandbox 1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.021717820Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/7526dbff-4c22-4944-ade5-279afd8295a7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.021743557Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.021750454Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.021757478Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043317155Z" level=info msg="NetworkStart: stopping network for sandbox f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043463799Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/15c33310-56a6-4348-b0ef-3aa62dc1a538 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043488332Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043497147Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043505127Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043490804Z" level=info msg="NetworkStart: stopping network for sandbox 85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043726061Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9bab1839-7588-407f-95b5-4bd81d023bfc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043748058Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043754906Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.043761762Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062383855Z" level=info msg="NetworkStart: stopping network for sandbox a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062507894Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/0c1a18c0-c67b-4d6b-ad4a-890ae44854af Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062528298Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062536705Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062542751Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062681407Z" level=info msg="NetworkStart: stopping network for sandbox cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062838776Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f5776def-c0d7-4c0f-9e0e-a24f79ed357b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062861311Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062868256Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.062874123Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.063061103Z" level=info msg="NetworkStart: stopping network for sandbox fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.063178807Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/de09f7a4-46a6-4e7b-b9af-596b9e496f0e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.063213073Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.063220856Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:42.063228542Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:45.021929493Z" level=info msg="NetworkStart: stopping network for sandbox 78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:45.022080913Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/0ba4f186-b28c-4f29-90d7-ea28fc12230f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:45.022106969Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:45.022113775Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:45.022121938Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:46.020953705Z" level=info msg="NetworkStart: stopping network for sandbox 2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:46.021097841Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/546b3871-f096-47c8-bc25-22ebb77caad4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:46.021121423Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:46.021128354Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:46.021134943Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:48.021727515Z" level=info msg="NetworkStart: stopping network for sandbox 596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:48.021860447Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/9e6e171b-ff0c-4c17-9c77-39b15ae0a5f4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:48.021885090Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:48.021893339Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:48.021899689Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:50.997090 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:07:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:50.997868072Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=80392b68-4713-48a9-b8bb-c84565a667f8 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:07:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:50.998068636Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=80392b68-4713-48a9-b8bb-c84565a667f8 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:07:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:50.998654767Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=18a71a04-1a3d-4035-9f76-e6cfe24fa8be name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:07:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:50.998795429Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=18a71a04-1a3d-4035-9f76-e6cfe24fa8be name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:07:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:50.999616752Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=35ff8a8b-9e1f-4a70-a183-4f781fb05fc9 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.003225544Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope.
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.024451875Z" level=info msg="NetworkStart: stopping network for sandbox b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.024745541Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/6068dc54-38e5-4b2c-b790-97c86d862579 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.024769471Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.024776171Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.024782317Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.
-- Subject: Unit crio-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.126956776Z" level=info msg="Created container b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=35ff8a8b-9e1f-4a70-a183-4f781fb05fc9 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.127483311Z" level=info msg="Starting container: b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" id=86396407-9cdb-4f24-9eb8-8df04f5e7f04 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.146482717Z" level=info msg="Started container" PID=108992 containerID=b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=86396407-9cdb-4f24-9eb8-8df04f5e7f04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.151188045Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.161836898Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.161857087Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.161869480Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.170997230Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.171016014Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.171032013Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.179620117Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.179635956Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.179645527Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.188674831Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.188691348Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.188700904Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.197871193Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:51.197889723Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:07:51 hub-master-0.workload.bos2.lab conmon[108969]: conmon b1ca34f564a47d70be58 : container 108992 exited with status 1
Jan 23 17:07:51 hub-master-0.workload.bos2.lab systemd[1]: crio-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope has successfully entered the 'dead' state.
Jan 23 17:07:51 hub-master-0.workload.bos2.lab systemd[1]: crio-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope: Consumed 565ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope completed and consumed the indicated resources.
Jan 23 17:07:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope has successfully entered the 'dead' state.
Jan 23 17:07:51 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope: Consumed 52ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f.scope completed and consumed the indicated resources.
Jan 23 17:07:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:52.023889682Z" level=info msg="NetworkStart: stopping network for sandbox d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:52.024029417Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/58bb62d3-ae8a-4d02-9487-eb37b2acff67 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:52.024050459Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:52.024058099Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:52.024064889Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:52.117024 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/189.log"
Jan 23 17:07:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:52.117481 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/188.log"
Jan 23 17:07:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:52.118019 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" exitCode=1
Jan 23 17:07:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:52.118043 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f}
Jan 23 17:07:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:52.118067 8631 scope.go:115] "RemoveContainer" containerID="6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685"
Jan 23 17:07:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:52.118790356Z" level=info msg="Removing container: 6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685" id=886de680-c486-4a3b-993a-d7099625b484 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:07:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:52.118944 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:07:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:52.119467 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:07:52 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-629cda8399c6b97fd114816a7211193114fd4629f92a5f629c9aa5dfa465445f-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-629cda8399c6b97fd114816a7211193114fd4629f92a5f629c9aa5dfa465445f-merged.mount has successfully entered the 'dead' state.
Jan 23 17:07:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:52.158736226Z" level=info msg="Removed container 6733411a4ba236496f97962d86995d9afd688700ba0e917868297523e1824685: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=886de680-c486-4a3b-993a-d7099625b484 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033114096Z" level=info msg="NetworkStart: stopping network for sandbox 01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033238901Z" level=info msg="NetworkStart: stopping network for sandbox 35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033252869Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/0851302a-5572-4f26-98b0-6f7aca9ac0c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033322821Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033329242Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033337175Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033392711Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/5f23eb2e-d5f4-42fc-830d-2fa6961ee29f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033418803Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033425867Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:53.033433184Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:53.121452 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/189.log"
Jan 23 17:07:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:55.022004891Z" level=info msg="NetworkStart: stopping network for sandbox 4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:07:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:55.022384607Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bc09e69a-bec5-4f14-9b0a-d48a06c90154 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:07:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:55.022407774Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:07:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:55.022415025Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:07:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:55.022424238Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:07:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:55.668037 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:07:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:07:55.668929 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:07:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:07:55.669412 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:07:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:07:58.143576448Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:08:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:00.022789532Z" level=info msg="NetworkStart: stopping network for sandbox 26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:00.022926565Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/3e4addca-995d-44bd-829e-35015fb35fb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:08:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:00.022949675Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:08:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:00.022956158Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:08:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:00.022963903Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:08:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:08.996477 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:08:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:08.997125 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.034233593Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.034274422Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-210bc1bd\x2d6cee\x2d4717\x2db0cd\x2d87f1fcd0152f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-210bc1bd\x2d6cee\x2d4717\x2db0cd\x2d87f1fcd0152f.mount has successfully entered the 'dead' state.
Jan 23 17:08:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-210bc1bd\x2d6cee\x2d4717\x2db0cd\x2d87f1fcd0152f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-210bc1bd\x2d6cee\x2d4717\x2db0cd\x2d87f1fcd0152f.mount has successfully entered the 'dead' state.
Jan 23 17:08:19 hub-master-0.workload.bos2.lab systemd[1]: run-netns-210bc1bd\x2d6cee\x2d4717\x2db0cd\x2d87f1fcd0152f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-210bc1bd\x2d6cee\x2d4717\x2db0cd\x2d87f1fcd0152f.mount has successfully entered the 'dead' state.
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.074372189Z" level=info msg="runSandbox: deleting pod ID bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264 from idIndex" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.074396397Z" level=info msg="runSandbox: removing pod sandbox bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.074412102Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.074425631Z" level=info msg="runSandbox: unmounting shmPath for sandbox bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.094459344Z" level=info msg="runSandbox: removing pod sandbox from storage: bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.098125243Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:19.098144336Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=b66da970-faf0-435d-b350-175c8106cc84 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:19.098392 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:19.098441 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:08:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:19.098464 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:08:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:19.098511 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bce5738596853915082236491f0d2da0b2b986186680a7926bbfec32c2479264): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:08:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:22.996583 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:08:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:22.997084 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.034247470Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.034505414Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8a7ad037\x2db423\x2d428c\x2db685\x2d7aa9001ecbd5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-8a7ad037\x2db423\x2d428c\x2db685\x2d7aa9001ecbd5.mount has successfully entered the 'dead' state.
Jan 23 17:08:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8a7ad037\x2db423\x2d428c\x2db685\x2d7aa9001ecbd5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-8a7ad037\x2db423\x2d428c\x2db685\x2d7aa9001ecbd5.mount has successfully entered the 'dead' state.
Jan 23 17:08:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8a7ad037\x2db423\x2d428c\x2db685\x2d7aa9001ecbd5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-8a7ad037\x2db423\x2d428c\x2db685\x2d7aa9001ecbd5.mount has successfully entered the 'dead' state.
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.074303558Z" level=info msg="runSandbox: deleting pod ID 08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db from idIndex" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.074330022Z" level=info msg="runSandbox: removing pod sandbox 08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.074345045Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.074356769Z" level=info msg="runSandbox: unmounting shmPath for sandbox 08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.086435893Z" level=info msg="runSandbox: removing pod sandbox from storage: 08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.092900761Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:26.092923398Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=4a4d5116-3dc9-41ad-b837-eab7a7f5788d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:26.093119 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:26.093167 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:08:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:26.093188 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:08:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:26.093242 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(08623f992a7bcd2061b7725953c3f73777ec01b392c7f9c7ee8e21d84cbe87db): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.032746257Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.032788064Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7526dbff\x2d4c22\x2d4944\x2dade5\x2d279afd8295a7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7526dbff\x2d4c22\x2d4944\x2dade5\x2d279afd8295a7.mount has successfully entered the 'dead' state.
Jan 23 17:08:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7526dbff\x2d4c22\x2d4944\x2dade5\x2d279afd8295a7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7526dbff\x2d4c22\x2d4944\x2dade5\x2d279afd8295a7.mount has successfully entered the 'dead' state.
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.053068539Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.053100158Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.054151667Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.054187668Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-15c33310\x2d56a6\x2d4348\x2db0ef\x2d3aa62dc1a538.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-15c33310\x2d56a6\x2d4348\x2db0ef\x2d3aa62dc1a538.mount has successfully entered the 'dead' state.
Jan 23 17:08:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9bab1839\x2d7588\x2d407f\x2d95b5\x2d4bd81d023bfc.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9bab1839\x2d7588\x2d407f\x2d95b5\x2d4bd81d023bfc.mount has successfully entered the 'dead' state.
Jan 23 17:08:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7526dbff\x2d4c22\x2d4944\x2dade5\x2d279afd8295a7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7526dbff\x2d4c22\x2d4944\x2dade5\x2d279afd8295a7.mount has successfully entered the 'dead' state.
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.073434793Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.073462606Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.073829148Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.073855954Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9bab1839\x2d7588\x2d407f\x2d95b5\x2d4bd81d023bfc.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9bab1839\x2d7588\x2d407f\x2d95b5\x2d4bd81d023bfc.mount has successfully entered the 'dead' state.
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.074128982Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.074170293Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.076513513Z" level=info msg="runSandbox: deleting pod ID 1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275 from idIndex" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.076543539Z" level=info msg="runSandbox: removing pod sandbox 1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.076560257Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.076574667Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.087451389Z" level=info msg="runSandbox: removing pod sandbox from storage: 1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.091017185Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.091038047Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=f85c08e6-8253-4b08-a27b-aceef2dbace6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.091309 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.091364 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.091388 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.091444 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.112282679Z" level=info msg="runSandbox: deleting pod ID 85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89 from idIndex" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.112313597Z" level=info msg="runSandbox: removing pod sandbox 85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.112327058Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.112340542Z" level=info msg="runSandbox: unmounting shmPath for sandbox 85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.118304289Z" level=info msg="runSandbox: deleting pod ID f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871 from idIndex" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.118329719Z" level=info msg="runSandbox: removing pod sandbox f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.118342616Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.118375427Z" level=info msg="runSandbox: unmounting shmPath for sandbox f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.122308039Z" level=info msg="runSandbox: deleting pod ID a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29 from idIndex" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.122330957Z" level=info msg="runSandbox: removing pod sandbox a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.122343208Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.122364192Z" level=info msg="runSandbox: unmounting shmPath for sandbox a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.126315337Z" level=info msg="runSandbox: deleting pod ID cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880 from idIndex" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.126329165Z" level=info msg="runSandbox: deleting pod ID fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186 from idIndex" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.126358750Z" level=info msg="runSandbox: removing pod sandbox fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.126373923Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.126385783Z" level=info msg="runSandbox: unmounting shmPath for sandbox fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.126341781Z" level=info msg="runSandbox: removing pod sandbox cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.126428960Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.126442078Z" level=info msg="runSandbox: unmounting shmPath for sandbox cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.127441989Z" level=info msg="runSandbox: removing pod sandbox from storage: 85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.130900753Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.130919447Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=cfe38730-249e-42b9-8b9c-fe3e324a9d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.131100 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.131137 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.131158 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.131197 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.136469300Z" level=info msg="runSandbox: removing pod sandbox from storage: f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.139808774Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.139827760Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f1002205-4b24-4e9b-a9ae-fcf641f89024 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.139987 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.140019 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.140040 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.140075 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.144424645Z" level=info msg="runSandbox: removing pod sandbox from storage: a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.147697199Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.147716373Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=55dba48d-b2b7-45cc-a372-0924896b7c37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.147883 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.147917 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.147939 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.147977 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.151461892Z" level=info msg="runSandbox: removing pod sandbox from storage: fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.153439533Z" level=info msg="runSandbox: removing pod sandbox from storage: cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.154770128Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.154789529Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d0f70872-d062-4200-b64a-e347bdf62b98 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.154995 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.155024 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.155044 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.155080 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.158004028Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.158024293Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=adb708d0-43a6-425a-91ef-942b69dd6d47 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.158235 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.158269 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.158291 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:27.158328 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.183139 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.183292 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.183445313Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.183476627Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.183504 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.183566374Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.183591468Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.183708 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.183823219Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.183851182Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.183875428Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.183902892Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.183984 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.184366217Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.184397616Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.211821685Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/fb2b65de-aeb3-4f82-8ca6-1035fa356e14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.211847137Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.214172662Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/ee0f6c03-348a-480d-995f-76b670e00edc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.214194153Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.216759969Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/40048b6b-f21c-4d08-9e06-36de5805bbf2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.216781127Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.218244843Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/29137f01-02b6-4e1a-af86-354d977512f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.218269458Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.218928911Z" level=info msg="Got pod network 
&{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/2fc52a70-5f72-4085-a886-c4d8880a6855 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:08:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:27.218952045Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.885825 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.885843 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.885850 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.885855 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.885862 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.885868 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:08:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:27.885877 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-de09f7a4\x2d46a6\x2d4e7b\x2db9af\x2d596b9e496f0e.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-de09f7a4\x2d46a6\x2d4e7b\x2db9af\x2d596b9e496f0e.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-de09f7a4\x2d46a6\x2d4e7b\x2db9af\x2d596b9e496f0e.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f5776def\x2dc0d7\x2d4c0f\x2d9e0e\x2da24f79ed357b.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f5776def\x2dc0d7\x2d4c0f\x2d9e0e\x2da24f79ed357b.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f5776def\x2dc0d7\x2d4c0f\x2d9e0e\x2da24f79ed357b.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0c1a18c0\x2dc67b\x2d4d6b\x2dad4a\x2d890ae44854af.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0c1a18c0\x2dc67b\x2d4d6b\x2dad4a\x2d890ae44854af.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0c1a18c0\x2dc67b\x2d4d6b\x2dad4a\x2d890ae44854af.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a1ac375c55315ece863299f0a3c49b676abf627a8c9b07e62a509b1a32a17f29-userdata-shm.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fb3dcbda14d856465087f1ecf127a1ae89edbffe1d054427bbed424c78574186-userdata-shm.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cc41b4058a3d52884471a8bbfb4eea979668d2fd6cf9d25ec3188057f8c48880-userdata-shm.mount: Succeeded.
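Every CreatePodSandbox failure above reduces to the same condition: Multus is configured with a readiness indicator file (here /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which OVN-Kubernetes drops once the default network on the node is actually up) and polls for it before delegating any CNI ADD or DEL. The lower-case "pollimmediate error: timed out waiting for the condition" text is the telltale output of wait.PollImmediate / wait.ErrWaitTimeout from k8s.io/apimachinery. A minimal, stdlib-only Go sketch of that polling pattern; the one-second interval and ten-second timeout are illustrative assumptions, not Multus's actual values:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForReadinessIndicator blocks until path exists or timeout elapses,
// checking immediately and then once per interval, the way the readiness
// gate behaves in the entries above.
func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // default-network config exists; safe to delegate the CNI ADD
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition") // matches the logged text
		}
		time.Sleep(interval)
	}
}

func main() {
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForReadinessIndicator(indicator, time.Second, 10*time.Second); err != nil {
		fmt.Println("pollimmediate error:", err)
	}
}

Until that file appears, every sandbox on the node fails the same way, which is why the identical error repeats for the oauth, apiserver, controller-manager, and guard pods below.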
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9bab1839\x2d7588\x2d407f\x2d95b5\x2d4bd81d023bfc.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-15c33310\x2d56a6\x2d4348\x2db0ef\x2d3aa62dc1a538.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-15c33310\x2d56a6\x2d4348\x2db0ef\x2d3aa62dc1a538.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-85fad2b25263d770a02ce0f2c77e6074206ae0541038a305fffb1b0d4f7a0d89-userdata-shm.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f52959d98675c71aebe0cd839e509f39ff26171ece79de5a5169df7103c64871-userdata-shm.mount: Succeeded.
Jan 23 17:08:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1d7a904b44bd6cfa6148ecab53afb6c342a2571da157f49d56f0d40bb08f8275-userdata-shm.mount: Succeeded.
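A side note on the mount unit names above: systemd escapes "/" as "-" and a literal "-" as "\x2d" in unit names, which is why the namespace IDs look mangled. systemd-escape --unescape --path reverses it on a live host; a small Go equivalent, a sketch that only handles the \xNN escapes seen in this log:

package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// systemd unit names escape "/" as "-" and a literal "-" as "\x2d".
var hexEscape = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

// unescapeUnitPath reverses that escaping for a .mount unit name: first the
// unescaped "-" separators become "/", then \xNN escapes become their byte
// values (only \x2d appears in this journal).
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	name = strings.ReplaceAll(name, "-", "/")
	name = hexEscape.ReplaceAllStringFunc(name, func(m string) string {
		b, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(b))
	})
	return "/" + name
}

func main() {
	fmt.Println(unescapeUnitPath(`run-netns-de09f7a4\x2d46a6\x2d4e7b\x2db9af\x2d596b9e496f0e.mount`))
	// prints: /run/netns/de09f7a4-46a6-4e7b-b9af-596b9e496f0e
}

So each run-netns-*.mount "Succeeded" entry is the network-namespace bind mount of a torn-down sandbox being released, matching the runSandbox cleanup entries that precede it.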
Jan 23 17:08:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:28.142490405Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.033518858Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.033554645Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0ba4f186\x2db28c\x2d4f29\x2d90d7\x2dea28fc12230f.mount: Succeeded.
Jan 23 17:08:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0ba4f186\x2db28c\x2d4f29\x2d90d7\x2dea28fc12230f.mount: Succeeded.
Jan 23 17:08:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0ba4f186\x2db28c\x2d4f29\x2d90d7\x2dea28fc12230f.mount: Succeeded.
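"Found defunct process with PID 7327 (runc)" means a runc child exited but had not yet been reaped; a defunct (zombie) process shows state Z in /proc/<pid>/stat. A minimal Linux-only check in the same spirit, a sketch rather than CRI-O's actual reaper:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	stats, _ := filepath.Glob("/proc/[0-9]*/stat")
	for _, stat := range stats {
		data, err := os.ReadFile(stat)
		if err != nil {
			continue // the process exited while we were scanning
		}
		s := string(data)
		// /proc/<pid>/stat is "pid (comm) state ..."; comm may contain
		// spaces and parentheses, so locate the state after the last ')'.
		i := strings.LastIndexByte(s, ')')
		if i < 0 || i+1 >= len(s) {
			continue
		}
		if fields := strings.Fields(s[i+1:]); len(fields) > 0 && fields[0] == "Z" {
			fmt.Printf("Found defunct process with PID %s\n", strings.Fields(s)[0])
		}
	}
}

Here the zombie is a symptom, not a cause: the failed sandbox setups leave short-lived runc invocations behind faster than they are reaped.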
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.072348591Z" level=info msg="runSandbox: deleting pod ID 78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0 from idIndex" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.072373075Z" level=info msg="runSandbox: removing pod sandbox 78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.072385836Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.072401554Z" level=info msg="runSandbox: unmounting shmPath for sandbox 78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0-userdata-shm.mount: Succeeded.
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.084431975Z" level=info msg="runSandbox: removing pod sandbox from storage: 78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.087656930Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.087676114Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=3088abe5-80ed-4820-a6dc-861bb7d92dd4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:30.088052 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready?
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:08:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:30.088222 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:08:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:30.088243 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:08:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:30.088296 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(78e636a93a349ff60dfcad670282ce312aa00ea2c69be93cacfb67ae98879fa0): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:08:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:30.995706 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.996030255Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:30.996070882Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.006832390Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/9afe05e5-e440-48cc-a995-428c909937a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.006852306Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.031786860Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.031816863Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-546b3871\x2df096\x2d47c8\x2dbc25\x2d22ebb77caad4.mount: Succeeded.
Jan 23 17:08:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-546b3871\x2df096\x2d47c8\x2dbc25\x2d22ebb77caad4.mount: Succeeded.
Jan 23 17:08:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-546b3871\x2df096\x2d47c8\x2dbc25\x2d22ebb77caad4.mount: Succeeded.
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.079319815Z" level=info msg="runSandbox: deleting pod ID 2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5 from idIndex" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.079351386Z" level=info msg="runSandbox: removing pod sandbox 2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.079364291Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.079376720Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5-userdata-shm.mount: Succeeded.
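The same fail-teardown-retry cycle repeats for every pending pod, so the fastest way to see who is affected is to count the pod_workers.go "Error syncing pod, skipping" entries, which kubelet logs once per failed sync. A small Go filter for a saved excerpt of this journal; it reads the log on stdin, and the file name in the usage comment is hypothetical:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// One "Error syncing pod, skipping" entry is logged per failed pod sync;
// the pod="<namespace>/<name>" field follows the quoted error.
var podRef = regexp.MustCompile(`Error syncing pod, skipping.*?pod="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range podRef.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%5d  %s\n", n, pod)
	}
}

// usage: go run countfailures.go < node.journal

On this excerpt it would show the oauth, apiserver, controller-manager, and guard pods each failing once per retry cycle, all blocked behind the missing 10-ovn-kubernetes.conf rather than failing for independent reasons.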
Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.091425181Z" level=info msg="runSandbox: removing pod sandbox from storage: 2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.094326549Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:31.094346091Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=e97012d2-5af1-4cde-b50f-673e55c609e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:31.094549 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:08:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:31.094593 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:08:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:31.094615 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:08:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:31.094661 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(2fdfb2b2324b4717668298e9de05672db6a25d689f26a3c47047110ec9119bd5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.033514601Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.033556721Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9e6e171b\x2dff0c\x2d4c17\x2d9c77\x2d39b15ae0a5f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9e6e171b\x2dff0c\x2d4c17\x2d9c77\x2d39b15ae0a5f4.mount has successfully entered the 'dead' state. Jan 23 17:08:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9e6e171b\x2dff0c\x2d4c17\x2d9c77\x2d39b15ae0a5f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9e6e171b\x2dff0c\x2d4c17\x2d9c77\x2d39b15ae0a5f4.mount has successfully entered the 'dead' state. Jan 23 17:08:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9e6e171b\x2dff0c\x2d4c17\x2d9c77\x2d39b15ae0a5f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9e6e171b\x2dff0c\x2d4c17\x2d9c77\x2d39b15ae0a5f4.mount has successfully entered the 'dead' state. 
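Every failed RunPodSandbox in this stretch shares one signature: before performing a CNI ADD or DEL, the Multus plugin waits for the default network's readiness indicator file at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, and it is that wait which times out and produces the "PollImmediate error waiting for ReadinessIndicatorFile" and "pollimmediate error: timed out waiting for the condition" messages above. A minimal Go sketch of that kind of wait, using wait.PollImmediate from k8s.io/apimachinery; the 250 ms interval and 10 s timeout here are illustrative assumptions, not Multus's actual values:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until path exists or the timeout
    // elapses, mirroring the readiness-indicator wait reported above.
    // Interval and timeout are assumptions for illustration.
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        return wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err == nil {
                return true, nil // file present: default network is ready
            }
            return false, nil // not there yet; keep polling until timeout
        })
    }

    func main() {
        if err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second); err != nil {
            fmt.Println("pollimmediate error:", err)
        }
    }

With the file absent, PollImmediate returns wait.ErrWaitTimeout, whose message is exactly the "timed out waiting for the condition" text that CRI-O and the kubelet propagate upward in the entries below.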
Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.071306446Z" level=info msg="runSandbox: deleting pod ID 596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2 from idIndex" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.071330313Z" level=info msg="runSandbox: removing pod sandbox 596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.071347429Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.071359102Z" level=info msg="runSandbox: unmounting shmPath for sandbox 596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2-userdata-shm.mount: Succeeded.
Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.083438457Z" level=info msg="runSandbox: removing pod sandbox from storage: 596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.086964544Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:33.086982811Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1fbf9976-9245-4475-9e3d-ee4a852210e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:33.087270 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:33.087319 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:08:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:33.087341 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:08:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:33.087392 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(596404d86dd0c42f6fad9bce26caf4aadbdf6923d213decd6ac42dac196fd2a2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.036590067Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.036628457Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6068dc54\x2d38e5\x2d4b2c\x2db790\x2d97c86d862579.mount: Succeeded.
Jan 23 17:08:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6068dc54\x2d38e5\x2d4b2c\x2db790\x2d97c86d862579.mount: Succeeded.
Jan 23 17:08:36 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6068dc54\x2d38e5\x2d4b2c\x2db790\x2d97c86d862579.mount: Succeeded.
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.073306643Z" level=info msg="runSandbox: deleting pod ID b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4 from idIndex" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.073331485Z" level=info msg="runSandbox: removing pod sandbox b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.073347442Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.073361946Z" level=info msg="runSandbox: unmounting shmPath for sandbox b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4-userdata-shm.mount: Succeeded.
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.089466449Z" level=info msg="runSandbox: removing pod sandbox from storage: b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.092973754Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:36.092991247Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=22395b2e-5a1d-479c-a3e7-17e3f76ae2da name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:36.093231 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:36.093274 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:08:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:36.093296 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:08:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:36.093344 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b32f68c20bf7aafd9f2354873dd49f6684e0c4777b3fe681dd53fba620a029f4): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:08:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:36.996522 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:08:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:36.997060 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.035586744Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.035617672Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-58bb62d3\x2dae8a\x2d4d02\x2d9487\x2deb37b2acff67.mount: Succeeded.
Jan 23 17:08:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-58bb62d3\x2dae8a\x2d4d02\x2d9487\x2deb37b2acff67.mount: Succeeded.
Jan 23 17:08:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-58bb62d3\x2dae8a\x2d4d02\x2d9487\x2deb37b2acff67.mount: Succeeded.
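The entry at 17:08:36.997060 is the root cause behind all of these timeouts: ovnkube-node-897lw, the pod that would write 10-ovn-kubernetes.conf once OVN networking is up, is itself stuck in CrashLoopBackOff with a 5m0s back-off, so the readiness indicator file never appears and every other sandbox creation on the node fails behind it. The kubelet's container restart back-off doubles after each failed restart up to a cap; a toy Go illustration of how it reaches the "back-off 5m0s" in the message, assuming the commonly cited kubelet defaults of a 10 s initial delay and a 5 m cap:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: 10s initial restart delay, doubled
        // after each failed restart, capped at 5m -- the "back-off 5m0s"
        // reported for ovnkube-node above.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: wait %v before retrying\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Under those assumptions the delay sequence is 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, ... which is why the pod sits untouched for five minutes between restart attempts while the CNI errors keep accumulating.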
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.070311950Z" level=info msg="runSandbox: deleting pod ID d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90 from idIndex" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.070335545Z" level=info msg="runSandbox: removing pod sandbox d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.070348451Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.070358979Z" level=info msg="runSandbox: unmounting shmPath for sandbox d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90-userdata-shm.mount: Succeeded.
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.082435504Z" level=info msg="runSandbox: removing pod sandbox from storage: d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.085923870Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:37.085943828Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=4c2ff14c-2768-472f-a7d2-60ff4892686e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:37.086146 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:37.086185 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:08:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:37.086211 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:08:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:37.086250 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(d523a019ff8c305559883187c2e0193a07cb9028e4fc1a64debe3408e3a84b90): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.043783437Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.043818326Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.045203804Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.045247009Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0851302a\x2d5572\x2d4f26\x2d98b0\x2d6f7aca9ac0c9.mount: Succeeded.
Jan 23 17:08:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5f23eb2e\x2dd5f4\x2d42fc\x2d830d\x2d2fa6961ee29f.mount: Succeeded.
Jan 23 17:08:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0851302a\x2d5572\x2d4f26\x2d98b0\x2d6f7aca9ac0c9.mount: Succeeded.
Jan 23 17:08:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5f23eb2e\x2dd5f4\x2d42fc\x2d830d\x2d2fa6961ee29f.mount: Succeeded.
Jan 23 17:08:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0851302a\x2d5572\x2d4f26\x2d98b0\x2d6f7aca9ac0c9.mount: Succeeded.
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.082305698Z" level=info msg="runSandbox: deleting pod ID 01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81 from idIndex" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.082333384Z" level=info msg="runSandbox: removing pod sandbox 01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.082346788Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.082359947Z" level=info msg="runSandbox: unmounting shmPath for sandbox 01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.086312836Z" level=info msg="runSandbox: deleting pod ID 35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8 from idIndex" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.086344342Z" level=info msg="runSandbox: removing pod sandbox 35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.086362029Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.086376059Z" level=info msg="runSandbox: unmounting shmPath for sandbox 35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.098433397Z" level=info msg="runSandbox: removing pod sandbox from storage: 01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.101696763Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.101714833Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=644ed0f5-e78b-4ea7-a101-271965abb9c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:38.101932 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:38.101974 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:08:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:38.101995 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:08:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:38.102036 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.102432913Z" level=info msg="runSandbox: removing pod sandbox from storage: 35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.105813480Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:38.105834079Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=9396572d-e27d-4368-b1d0-bd9c0c3e8ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:38.106077 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:38.106118 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:08:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:38.106141 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:08:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:38.106187 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:08:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5f23eb2e\x2dd5f4\x2d42fc\x2d830d\x2d2fa6961ee29f.mount: Succeeded.
Jan 23 17:08:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-01391459b476565dd546ecaa1cbf27d38559c263cf201c3734bc9ef48e564a81-userdata-shm.mount: Succeeded.
Jan 23 17:08:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-35aaca5704d3f598c5ec39ebe91c9bf0d785ed76f15153ff04da4bf4b6358ba8-userdata-shm.mount: Succeeded.
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.034083527Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.034337270Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bc09e69a\x2dbec5\x2d4f14\x2d9b0a\x2dd48a06c90154.mount: Succeeded.
Jan 23 17:08:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bc09e69a\x2dbec5\x2d4f14\x2d9b0a\x2dd48a06c90154.mount: Succeeded.
Jan 23 17:08:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bc09e69a\x2dbec5\x2d4f14\x2d9b0a\x2dd48a06c90154.mount: Succeeded.
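Because the same failure repeats for many distinct pods (etcd-guard, network-check-target, revision-pruner, ingress-canary, network-metrics-daemon, kube-apiserver-guard, installer-10, ...), it can be quicker to tally the blocked pods than to read the entries one by one. A small Go sketch that scans a saved copy of this journal for the readiness-indicator timeout and counts occurrences per pod= field; the input file name is a placeholder assumption, so point it at wherever the log was saved:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        // Placeholder path: a saved copy of the journal excerpt above.
        f, err := os.Open("node-journal.log")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        podRe := regexp.MustCompile(`pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal entries can be very long
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "readinessindicatorfile") {
                continue // only count the Multus readiness timeouts
            }
            if m := podRe.FindStringSubmatch(line); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%4d %s\n", n, pod)
        }
    }

A tally like this makes the shape of the incident obvious: every pod on the node that needs a new sandbox is failing with the identical Multus timeout, which points at the single shared dependency (the OVN readiness file) rather than at any individual workload.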
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.076308352Z" level=info msg="runSandbox: deleting pod ID 4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363 from idIndex" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.076332286Z" level=info msg="runSandbox: removing pod sandbox 4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.076346184Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.076356947Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363-userdata-shm.mount: Succeeded.
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.104441392Z" level=info msg="runSandbox: removing pod sandbox from storage: 4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.107826831Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.107846647Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=c6681773-ab22-41bd-97e5-9e66c94641e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:40.108072 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:08:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:40.108114 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:08:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:40.108138 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:08:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:40.108187 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(4cc2a474c25e6fe6ad745e121280edfa0c6bc60fb221393e470fe60460a3b363): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 17:08:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:40.995836 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:08:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:40.995944 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.996182456Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.996255118Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.996271333Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:08:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:40.996304478Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:08:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:41.011378214Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/004ec2bf-1da5-4f0b-a84f-94d1692f627d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:08:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:41.011398372Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:08:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:41.012028679Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/92adfd3f-0d7c-4a29-a949-2166c40c0497 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:08:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:41.012050137Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:08:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:43.996243 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:08:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:43.996565612Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:43.996610704Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:44.007366333Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/9535e823-12bf-4eaa-8173-e22dbacf28b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:44.007386785Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:44.996295 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:08:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:44.996623558Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:44.996663964Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.007836087Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/cff6cd89-f454-495e-ae4c-9ab2a2f2074b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.007864085Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.034169152Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out 
waiting for the condition" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.034200516Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3e4addca\x2d995d\x2d44bd\x2d829e\x2d35015fb35fb7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3e4addca\x2d995d\x2d44bd\x2d829e\x2d35015fb35fb7.mount has successfully entered the 'dead' state. Jan 23 17:08:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3e4addca\x2d995d\x2d44bd\x2d829e\x2d35015fb35fb7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3e4addca\x2d995d\x2d44bd\x2d829e\x2d35015fb35fb7.mount has successfully entered the 'dead' state. Jan 23 17:08:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3e4addca\x2d995d\x2d44bd\x2d829e\x2d35015fb35fb7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3e4addca\x2d995d\x2d44bd\x2d829e\x2d35015fb35fb7.mount has successfully entered the 'dead' state. Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.078303632Z" level=info msg="runSandbox: deleting pod ID 26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562 from idIndex" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.078327196Z" level=info msg="runSandbox: removing pod sandbox 26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.078339988Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.078352074Z" level=info msg="runSandbox: unmounting shmPath for sandbox 26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.094427398Z" level=info msg="runSandbox: removing pod sandbox from storage: 26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.097172238Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:45.097191172Z" level=info msg="runSandbox: releasing pod 
sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a4358c14-587f-4723-9021-7ffd29ab26bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:45.097450 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:08:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:45.097492 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:08:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:45.097515 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:08:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:45.097564 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:08:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-26c0c9cf05f8ee074bc4b5b89a61db2980ae2e48850b92464f5af901332ea562-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:08:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:46.996036 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:08:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:46.996418447Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:46.996458794Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:47.009337292Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/9aaff16c-bd1f-4238-8c63-1c4b602b1050 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:47.009356622Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:48.996173 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:08:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:48.996539481Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:48.996577105Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:49.007721557Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/2b822f7c-74e8-4c0a-bbdb-7d1635c20d82 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:49.007745640Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:49.996329 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:08:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:49.996644225Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:49.996679908Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:49.997202 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:08:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:08:49.997704 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:08:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:50.012441181Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/e1fc3787-8f6a-466a-8f81-0c6247066a27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:50.012469671Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:50.995671 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:08:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:50.996080029Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:50.996117339Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:51.007324859Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/eee7a9e6-626c-40fa-a1e0-92abdde40ace Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:51.007348214Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:52.996275 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:08:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:52.996445 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:08:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:52.996709155Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:52.996765932Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:52.996775808Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:52.996807082Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:53.011796900Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/25685d84-3065-40ec-912d-6d39d3021006 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:53.011816720Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:53.012389137Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d81d36fc-3901-4cd5-b16d-ac4e1f1a18e3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:53.012406250Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:08:55.995647 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:08:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:55.996016494Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:08:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:55.996068496Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:08:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:56.007348997Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/34d305d2-d42c-456b-a373-37ae1a992afd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:08:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:56.007554090Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:08:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:08:58.143321018Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:09:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:02.996296 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:09:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:02.996797 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.225828273Z" level=info msg="NetworkStart: stopping network for sandbox 846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.225980806Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/fb2b65de-aeb3-4f82-8ca6-1035fa356e14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.226004016Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.226010692Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.226017118Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network 
\"multus-cni-network\" (type=multus)" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.228350424Z" level=info msg="NetworkStart: stopping network for sandbox f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.228502236Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/ee0f6c03-348a-480d-995f-76b670e00edc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.228528365Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.228536097Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.228542242Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230255787Z" level=info msg="NetworkStart: stopping network for sandbox 412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230376347Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/40048b6b-f21c-4d08-9e06-36de5805bbf2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230398432Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230406212Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230413137Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230605816Z" level=info msg="NetworkStart: stopping network for sandbox 3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230724572Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/29137f01-02b6-4e1a-af86-354d977512f2 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230747548Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230755274Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.230762488Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.232495719Z" level=info msg="NetworkStart: stopping network for sandbox e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.232598793Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/2fc52a70-5f72-4085-a886-c4d8880a6855 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.232620196Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.232627862Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:12.232633991Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:16.019892812Z" level=info msg="NetworkStart: stopping network for sandbox f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:16.020030846Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/9afe05e5-e440-48cc-a995-428c909937a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:16.020054147Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:16.020061023Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:16.020068553Z" level=info msg="Deleting pod 
openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:16.996236 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:09:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:16.996874 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.024311166Z" level=info msg="NetworkStart: stopping network for sandbox c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.024452973Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/004ec2bf-1da5-4f0b-a84f-94d1692f627d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.024474302Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.024480935Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.024487618Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.025640053Z" level=info msg="NetworkStart: stopping network for sandbox d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.025740781Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/92adfd3f-0d7c-4a29-a949-2166c40c0497 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.025759788Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.025767834Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:26.025774044Z" level=info 
msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:27.886092 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:09:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:27.886115 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:09:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:27.886122 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:09:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:27.886129 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:09:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:27.886136 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:09:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:27.886144 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:09:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:27.886150 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:09:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:28.142555864Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:09:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:28.996536 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:09:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:28.997033 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:09:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:29.021347494Z" level=info msg="NetworkStart: stopping network for sandbox a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:29.021487419Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/9535e823-12bf-4eaa-8173-e22dbacf28b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:29.021510860Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 
23 17:09:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:29.021516995Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:29.021523486Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:30.021967727Z" level=info msg="NetworkStart: stopping network for sandbox 6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:30.022127641Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/cff6cd89-f454-495e-ae4c-9ab2a2f2074b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:30.022152833Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:30.022160552Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:30.022168575Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:32.022299072Z" level=info msg="NetworkStart: stopping network for sandbox 30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:32.022441751Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/9aaff16c-bd1f-4238-8c63-1c4b602b1050 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:32.022466179Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:32.022473483Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:32.022479965Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:34.022140585Z" level=info msg="NetworkStart: stopping network for sandbox 45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 
17:09:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:34.022294209Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/2b822f7c-74e8-4c0a-bbdb-7d1635c20d82 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:34.022318712Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:34.022324882Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:34.022331474Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:35.024241341Z" level=info msg="NetworkStart: stopping network for sandbox 7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:35.024432511Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/e1fc3787-8f6a-466a-8f81-0c6247066a27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:35.024457410Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:35.024464176Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:35.024472376Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:36.021076457Z" level=info msg="NetworkStart: stopping network for sandbox 7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:36.021225431Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/eee7a9e6-626c-40fa-a1e0-92abdde40ace Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:36.021249010Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in 
CNI cache" Jan 23 17:09:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:36.021257242Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:36.021263974Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.023450974Z" level=info msg="NetworkStart: stopping network for sandbox e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.023659329Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d81d36fc-3901-4cd5-b16d-ac4e1f1a18e3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.023682465Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.023689579Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.023695890Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.023865575Z" level=info msg="NetworkStart: stopping network for sandbox c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.023978229Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/25685d84-3065-40ec-912d-6d39d3021006 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.024001357Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.024008921Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:38.024015478Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:41.021738726Z" level=info msg="NetworkStart: stopping network for sandbox 124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:41 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:41.021878868Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/34d305d2-d42c-456b-a373-37ae1a992afd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:09:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:41.021902393Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:09:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:41.021909088Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:09:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:41.021915123Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:09:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:41.996553 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:09:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:41.997061 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:09:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:55.996687 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:09:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:55.997378 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.238754068Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.238794388Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f" 
id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.239213271Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.239261012Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.240486851Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.240525174Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.240609885Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.240641102Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df" id=3fa6ba63-0301-4309-ad16-c17dba632190 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.243016281Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.243047930Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-29137f01\x2d02b6\x2d4e1a\x2daf86\x2d354d977512f2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-29137f01\x2d02b6\x2d4e1a\x2daf86\x2d354d977512f2.mount has successfully entered the 'dead' state. Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ee0f6c03\x2d348a\x2d480d\x2d995f\x2d76b670e00edc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ee0f6c03\x2d348a\x2d480d\x2d995f\x2d76b670e00edc.mount has successfully entered the 'dead' state. Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fb2b65de\x2daeb3\x2d4f82\x2d8ca6\x2d1035fa356e14.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-fb2b65de\x2daeb3\x2d4f82\x2d8ca6\x2d1035fa356e14.mount has successfully entered the 'dead' state. Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2fc52a70\x2d5f72\x2d4085\x2da886\x2dc4d8880a6855.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2fc52a70\x2d5f72\x2d4085\x2da886\x2dc4d8880a6855.mount has successfully entered the 'dead' state. Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-40048b6b\x2df21c\x2d4d08\x2d9e06\x2d36de5805bbf2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-40048b6b\x2df21c\x2d4d08\x2d9e06\x2d36de5805bbf2.mount has successfully entered the 'dead' state. Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ee0f6c03\x2d348a\x2d480d\x2d995f\x2d76b670e00edc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ee0f6c03\x2d348a\x2d480d\x2d995f\x2d76b670e00edc.mount has successfully entered the 'dead' state. Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fb2b65de\x2daeb3\x2d4f82\x2d8ca6\x2d1035fa356e14.mount: Succeeded. 
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-fb2b65de\x2daeb3\x2d4f82\x2d8ca6\x2d1035fa356e14.mount has successfully entered the 'dead' state.
Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-40048b6b\x2df21c\x2d4d08\x2d9e06\x2d36de5805bbf2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-40048b6b\x2df21c\x2d4d08\x2d9e06\x2d36de5805bbf2.mount has successfully entered the 'dead' state.
Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2fc52a70\x2d5f72\x2d4085\x2da886\x2dc4d8880a6855.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2fc52a70\x2d5f72\x2d4085\x2da886\x2dc4d8880a6855.mount has successfully entered the 'dead' state.
Jan 23 17:09:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-29137f01\x2d02b6\x2d4e1a\x2daf86\x2d354d977512f2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-29137f01\x2d02b6\x2d4e1a\x2daf86\x2d354d977512f2.mount has successfully entered the 'dead' state.
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.292373572Z" level=info msg="runSandbox: deleting pod ID f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74 from idIndex" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.292402301Z" level=info msg="runSandbox: removing pod sandbox f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.292419531Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.292442905Z" level=info msg="runSandbox: unmounting shmPath for sandbox f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.293285615Z" level=info msg="runSandbox: deleting pod ID 846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f from idIndex" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.293315585Z" level=info msg="runSandbox: removing pod sandbox 846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.293331912Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.293343451Z" level=info msg="runSandbox: unmounting shmPath for sandbox 846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.300309055Z" level=info msg="runSandbox: deleting pod ID 412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df from idIndex" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.300338424Z" level=info msg="runSandbox: removing pod sandbox 412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.300352394Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.300368940Z" level=info msg="runSandbox: unmounting shmPath for sandbox 412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.300310861Z" level=info msg="runSandbox: deleting pod ID e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719 from idIndex" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.300419748Z" level=info msg="runSandbox: removing pod sandbox e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.300434951Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.300451791Z" level=info msg="runSandbox: unmounting shmPath for sandbox e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.301277453Z" level=info msg="runSandbox: deleting pod ID 3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b from idIndex" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.301303141Z" level=info msg="runSandbox: removing pod sandbox 3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.301316595Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.301332212Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.308471350Z" level=info msg="runSandbox: removing pod sandbox from storage: 846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.308509892Z" level=info msg="runSandbox: removing pod sandbox from storage: f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.311272119Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.311291897Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f1f6dfd8-31eb-438f-a647-4c32eca310fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.311518    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.311562    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.311585    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.311629    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.314322034Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.314339921Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=64df8538-2b29-4a42-9f76-52f2dc07bab8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.314522    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.314562    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.314583    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.314632    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.316489808Z" level=info msg="runSandbox: removing pod sandbox from storage: 412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.316512587Z" level=info msg="runSandbox: removing pod sandbox from storage: 3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.316515131Z" level=info msg="runSandbox: removing pod sandbox from storage: e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.319769478Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.319787137Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=3fa6ba63-0301-4309-ad16-c17dba632190 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.320018    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.320056    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.320082    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.320121    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.326968043Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.326993384Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=3a34f78b-02e2-4481-90d8-cc78600ed860 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.327237    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.327276    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.327297    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.327337    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.330048407Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.330066130Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b561ff85-067a-4fb0-956f-7ad5c5ee7a01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.330287    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.330321    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.330343    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:09:57.330387    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:57.350007    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:57.350150    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:57.350150    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:57.350339    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:09:57.350366    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350397774Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350430202Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350556761Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350584976Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350621915Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350652633Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350740170Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350765913Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350811457Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.350828968Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.369757225Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/8a7c962c-6d09-40be-9f1e-e0126502f50b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.369778497Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.370500944Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/799592be-cd55-49d6-8a00-0bd07d1a444c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.370519815Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.377440542Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/c7f4937e-20fe-4b83-94ee-35ba3006b334 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.377460025Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.381806616Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/ea61a664-9879-4add-8b61-497dc755e3fa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.381825157Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.384423911Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/f12b0b52-6172-410d-b0ff-fb856f3e1dde Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:09:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:57.384445898Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:09:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:09:58.143161015Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2fc52a70\x2d5f72\x2d4085\x2da886\x2dc4d8880a6855.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2fc52a70\x2d5f72\x2d4085\x2da886\x2dc4d8880a6855.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-29137f01\x2d02b6\x2d4e1a\x2daf86\x2d354d977512f2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-29137f01\x2d02b6\x2d4e1a\x2daf86\x2d354d977512f2.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-40048b6b\x2df21c\x2d4d08\x2d9e06\x2d36de5805bbf2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-40048b6b\x2df21c\x2d4d08\x2d9e06\x2d36de5805bbf2.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ee0f6c03\x2d348a\x2d480d\x2d995f\x2d76b670e00edc.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ee0f6c03\x2d348a\x2d480d\x2d995f\x2d76b670e00edc.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fb2b65de\x2daeb3\x2d4f82\x2d8ca6\x2d1035fa356e14.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-fb2b65de\x2daeb3\x2d4f82\x2d8ca6\x2d1035fa356e14.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f9c66e6310cfed9b30d4b247c7bd8b2fffef7dff516976e8464d9dbb19343f74-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e74689f1f0e6ba4f0d977c92a5c7923c6ed89b221bbabd3a85f6d1682f570719-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-3103ca81c25b0718cd5a28b446e88c9891c714a65d3f1b8d8e70eb6f911d508b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-412c6bf4882ac44fb06785b8bea5350ca8d290af815793a587dffa8bf1aa72df-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:09:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-846588eed8be9920a79e404df46157b570ef9555de2ded8e398be2d501c8450f-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.031161417Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.031203383Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9afe05e5\x2de440\x2d48cc\x2da995\x2d428c909937a6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9afe05e5\x2de440\x2d48cc\x2da995\x2d428c909937a6.mount has successfully entered the 'dead' state.
Jan 23 17:10:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9afe05e5\x2de440\x2d48cc\x2da995\x2d428c909937a6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9afe05e5\x2de440\x2d48cc\x2da995\x2d428c909937a6.mount has successfully entered the 'dead' state.
Jan 23 17:10:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9afe05e5\x2de440\x2d48cc\x2da995\x2d428c909937a6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-9afe05e5\x2de440\x2d48cc\x2da995\x2d428c909937a6.mount has successfully entered the 'dead' state.
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.075318990Z" level=info msg="runSandbox: deleting pod ID f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64 from idIndex" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.075344918Z" level=info msg="runSandbox: removing pod sandbox f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.075358977Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.075374059Z" level=info msg="runSandbox: unmounting shmPath for sandbox f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.091439491Z" level=info msg="runSandbox: removing pod sandbox from storage: f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.094303691Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:01.094322719Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=c2fd260a-bea8-42d3-bae8-4ed4317be9a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:01.094538 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:01.094584 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:01.094620 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:01.094667 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f64bfccedaff70a4097416be610154383585c22a50ceba36d98263ef4f27dc64): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:10:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:06.996921 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:10:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:06.997483 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.037236578Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.037287019Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.037270266Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.037381450Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-92adfd3f\x2d0d7c\x2d4a29\x2da949\x2d2166c40c0497.mount: Succeeded.
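The failure chain in the records above is consistent: the ovnkube-node container is in CrashLoopBackOff, so OVN-Kubernetes never writes its CNI config to /var/run/multus/cni/net.d/10-ovn-kubernetes.conf; Multus treats that file as its readiness indicator and polls for it until the CNI request times out, which is why every sandbox add and delete here fails the same way. Roughly what that wait looks like, as a minimal Python sketch (illustrative only: the real plugin is Go code built on wait.PollImmediate, and the interval and timeout values below are assumptions, not Multus's):

    #!/usr/bin/env python3
    # Sketch of a PollImmediate-style wait for the readiness indicator file.
    # Path taken from the log above; interval/timeout are assumed values.
    import os
    import time

    READINESS_FILE = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

    def wait_for_readiness_file(path, interval=1.0, timeout=60.0):
        """Check immediately, then every `interval` seconds, until `path`
        exists or `timeout` elapses. Returns True if the file appeared."""
        deadline = time.monotonic() + timeout
        while True:
            if os.path.exists(path):          # OVN-Kubernetes writes this once ready
                return True
            if time.monotonic() >= deadline:  # -> "timed out waiting for the condition"
                return False
            time.sleep(interval)

    if __name__ == "__main__":
        if not wait_for_readiness_file(READINESS_FILE):
            raise SystemExit(f"still waiting for readiness indicator file @ {READINESS_FILE}")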
Jan 23 17:10:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-004ec2bf\x2d1da5\x2d4f0b\x2da84f\x2d94d1692f627d.mount: Succeeded.
Jan 23 17:10:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-92adfd3f\x2d0d7c\x2d4a29\x2da949\x2d2166c40c0497.mount: Succeeded.
Jan 23 17:10:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-004ec2bf\x2d1da5\x2d4f0b\x2da84f\x2d94d1692f627d.mount: Succeeded.
Jan 23 17:10:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-92adfd3f\x2d0d7c\x2d4a29\x2da949\x2d2166c40c0497.mount: Succeeded.
Jan 23 17:10:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-004ec2bf\x2d1da5\x2d4f0b\x2da84f\x2d94d1692f627d.mount: Succeeded.
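A side note on the mount-unit names above: the \x2d runs are systemd unit-name escaping, where "/" in a path becomes "-" and a literal "-" inside a component is spelled \x2d, so each unit maps back to a /run/utsns, /run/ipcns, or /run/netns mount for one pod sandbox. systemd-escape --unescape --path reverses this properly; a small Python sketch of the same decoding for the simple names seen here:

    def unit_to_path(unit):
        # Reverse the simple case of systemd unit-name escaping seen above:
        # drop the ".mount" suffix, treat "-" as a path separator, and turn
        # a literal "\x2d" back into "-". (systemd-escape handles the general
        # case; str.removesuffix needs Python 3.9+.)
        name = unit.removesuffix(".mount")
        return "/" + "/".join(part.replace("\\x2d", "-") for part in name.split("-"))

    print(unit_to_path(r"run-netns-004ec2bf\x2d1da5\x2d4f0b\x2da84f\x2d94d1692f627d.mount"))
    # -> /run/netns/004ec2bf-1da5-4f0b-a84f-94d1692f627d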
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.076337022Z" level=info msg="runSandbox: deleting pod ID c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a from idIndex" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.076366414Z" level=info msg="runSandbox: removing pod sandbox c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.076381338Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.076337882Z" level=info msg="runSandbox: deleting pod ID d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52 from idIndex" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.076408073Z" level=info msg="runSandbox: removing pod sandbox d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.076417817Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.076431561Z" level=info msg="runSandbox: unmounting shmPath for sandbox c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.076493676Z" level=info msg="runSandbox: unmounting shmPath for sandbox d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.092453208Z" level=info msg="runSandbox: removing pod sandbox from storage: d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.093455789Z" level=info msg="runSandbox: removing pod sandbox from storage: c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.096124769Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.096143164Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=26f1afbc-538f-4bea-995b-5c1c3911e52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:11.096508 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:11.096670 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:11.096691 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:11.096743 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.099623767Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.099650412Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=2af34352-4ce7-475c-b701-00e0ee96ad3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:11.099875 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:11.099911 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:11.099937 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:11.099986 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:10:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d0555a71bd1a6058c9af450c3ab3515c640cb8de0c99e7f30bc118f940b83e52-userdata-shm.mount: Succeeded.
Jan 23 17:10:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c6cc03387c7e14b0ade03c158ebe5be20c123be7f738fe4ff36c01b3772f9f5a-userdata-shm.mount: Succeeded.
Jan 23 17:10:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:11.995655 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.996087034Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:11.996132194Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:12.008018949Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/4a937165-55c5-44f4-a8be-8b8dc320abfc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:12.008043505Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.031814947Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.031851466Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9535e823\x2d12bf\x2d4eaa\x2d8173\x2de22dbacf28b2.mount: Succeeded.
Jan 23 17:10:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9535e823\x2d12bf\x2d4eaa\x2d8173\x2de22dbacf28b2.mount: Succeeded.
Jan 23 17:10:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9535e823\x2d12bf\x2d4eaa\x2d8173\x2de22dbacf28b2.mount: Succeeded.
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.081308645Z" level=info msg="runSandbox: deleting pod ID a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8 from idIndex" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.081333253Z" level=info msg="runSandbox: removing pod sandbox a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.081346714Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.081368242Z" level=info msg="runSandbox: unmounting shmPath for sandbox a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.097431641Z" level=info msg="runSandbox: removing pod sandbox from storage: a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.100169691Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:14.100188619Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=474cc146-bc83-48db-a564-534054385b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:14.100402 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:14.100460 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:10:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:14.100483 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:10:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:14.100543 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:10:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a756bfb1f11c8a17b441bd9988e57c3956b3fd344a5f61af78372cab0a2e1ce8-userdata-shm.mount: Succeeded.
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.032581865Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.032616255Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cff6cd89\x2df454\x2d495e\x2dae4c\x2d9ab2a2f2074b.mount: Succeeded.
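Two retry loops are visible in the records above. kubelet keeps re-creating the failed sandboxes on its sync loop (the "No sandbox for pod can be found. Need to start a new one" record at 17:10:11), while the crashing ovnkube-node container is held by CrashLoopBackOff, whose delay by default doubles per failed restart from 10s up to the 5m cap quoted in the "back-off 5m0s" message. A Python sketch of that growth (an illustration of the default policy, not kubelet code):

    import itertools

    def restart_delays(initial=10.0, factor=2.0, cap=300.0):
        # Yields the CrashLoopBackOff-style delay (seconds) before each retry:
        # exponential growth from `initial`, capped at `cap` (300s = the
        # "5m0s" quoted in the back-off record above).
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    print(list(itertools.islice(restart_delays(), 7)))
    # -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
    # After a handful of failures the pod sits at the cap, retrying every 5m.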
Jan 23 17:10:15 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cff6cd89\x2df454\x2d495e\x2dae4c\x2d9ab2a2f2074b.mount: Succeeded.
Jan 23 17:10:15 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cff6cd89\x2df454\x2d495e\x2dae4c\x2d9ab2a2f2074b.mount: Succeeded.
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.074315493Z" level=info msg="runSandbox: deleting pod ID 6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470 from idIndex" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.074339578Z" level=info msg="runSandbox: removing pod sandbox 6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.074353012Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.074365297Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470-userdata-shm.mount: Succeeded.
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.091439036Z" level=info msg="runSandbox: removing pod sandbox from storage: 6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.094717134Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:15.094735920Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=25acd835-1605-420a-94da-0c93ee7d3308 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:15.094922 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:15.094966 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:15.095000 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:15.095046 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6a33fdb822fffa6795cf3d534e835ac425082726f5bb967d39c1d753187c2470): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.033388055Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.033423211Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9aaff16c\x2dbd1f\x2d4238\x2d8c63\x2d1c4b602b1050.mount: Succeeded.
Jan 23 17:10:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9aaff16c\x2dbd1f\x2d4238\x2d8c63\x2d1c4b602b1050.mount: Succeeded.
Jan 23 17:10:17 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9aaff16c\x2dbd1f\x2d4238\x2d8c63\x2d1c4b602b1050.mount: Succeeded.
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.071305843Z" level=info msg="runSandbox: deleting pod ID 30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37 from idIndex" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.071329693Z" level=info msg="runSandbox: removing pod sandbox 30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.071342980Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.071354629Z" level=info msg="runSandbox: unmounting shmPath for sandbox 30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37-userdata-shm.mount: Succeeded.
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.087440196Z" level=info msg="runSandbox: removing pod sandbox from storage: 30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.090682439Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:17.090701087Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=5a1d674d-123e-4783-b453-f1276baea35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:17.090939 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:17.090988 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:10:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:17.091010 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:10:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:17.091055 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(30443fa5b8f34a0ad21decaad94a3f9dad2e6ca23b1fb25837aa301c10bdab37): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.033991837Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.034027166Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2b822f7c\x2d74e8\x2d4c0a\x2dbbdb\x2d7d1635c20d82.mount: Succeeded.
Jan 23 17:10:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2b822f7c\x2d74e8\x2d4c0a\x2dbbdb\x2d7d1635c20d82.mount: Succeeded.
Jan 23 17:10:19 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2b822f7c\x2d74e8\x2d4c0a\x2dbbdb\x2d7d1635c20d82.mount: Succeeded.
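With this many pods cycling through the same CreatePodSandbox failure, it can help to tally which pods are affected and how often. A throwaway Python pass over a saved copy of this journal (the journal.log filename is an assumption) that counts kubelet's "Error syncing pod, skipping" records per pod:

    import re
    from collections import Counter

    # Match the pod="<namespace>/<name>" field that follows the error phrase.
    pat = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')
    counts = Counter()
    with open("journal.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                counts[m.group(1)] += 1

    for pod, n in counts.most_common():
        print(f"{n:4d}  {pod}")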
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.081309477Z" level=info msg="runSandbox: deleting pod ID 45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b from idIndex" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.081333717Z" level=info msg="runSandbox: removing pod sandbox 45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.081347775Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.081360926Z" level=info msg="runSandbox: unmounting shmPath for sandbox 45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b-userdata-shm.mount: Succeeded.
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.097406848Z" level=info msg="runSandbox: removing pod sandbox from storage: 45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.100652317Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:19.100670499Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=f34bd244-a116-486e-967f-a36c65991570 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:19.100864 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:19.100908 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:19.100931 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:19.100982 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(45745cef84b07f726f9805cf032a73d9974059d12332cd8bb6de4be969a90f3b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.035096017Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.035142822Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e1fc3787\x2d8f6a\x2d466a\x2d8f81\x2d0c6247066a27.mount: Succeeded.
Jan 23 17:10:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e1fc3787\x2d8f6a\x2d466a\x2d8f81\x2d0c6247066a27.mount: Succeeded.
Jan 23 17:10:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e1fc3787\x2d8f6a\x2d466a\x2d8f81\x2d0c6247066a27.mount: Succeeded.
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.073308019Z" level=info msg="runSandbox: deleting pod ID 7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3 from idIndex" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.073336027Z" level=info msg="runSandbox: removing pod sandbox 7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.073352473Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.073374039Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3-userdata-shm.mount: Succeeded.
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.085443347Z" level=info msg="runSandbox: removing pod sandbox from storage: 7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.088819709Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:20.088838810Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1aae681c-cca9-474e-af41-d4167ccb243a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:20.089091 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:20.089133 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:10:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:20.089154 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:10:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:20.089203 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7c92d2bebf9e0642b16fb295afc37d5c22c0f703c45bed6672c1d9ca6ce515c3): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.032910998Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.032947498Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-eee7a9e6\x2d626c\x2d40fa\x2da1e0\x2d92abdde40ace.mount: Succeeded.
Jan 23 17:10:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-eee7a9e6\x2d626c\x2d40fa\x2da1e0\x2d92abdde40ace.mount: Succeeded.
Jan 23 17:10:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-eee7a9e6\x2d626c\x2d40fa\x2da1e0\x2d92abdde40ace.mount: Succeeded.
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.084310249Z" level=info msg="runSandbox: deleting pod ID 7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1 from idIndex" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.084336947Z" level=info msg="runSandbox: removing pod sandbox 7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.084350260Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.084362830Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1-userdata-shm.mount: Succeeded.
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.104444838Z" level=info msg="runSandbox: removing pod sandbox from storage: 7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.107755125Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:21.107775061Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=239bfc05-c6d6-43cc-9b20-96532f25f79d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:21.108004 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:21.108052 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:10:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:21.108075 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:10:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:21.108120 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7efaa6fbb2e2af6720b085b96362fefb7dda18250ef3894551da2a6e9352efe1): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:10:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:21.996606 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:10:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:21.997108 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.034789342Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.035043883Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.034882459Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.035137572Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d81d36fc\x2d3901\x2d4cd5\x2db16d\x2dac4e1f1a18e3.mount: Succeeded.
Jan 23 17:10:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-25685d84\x2d3065\x2d40ec\x2d912d\x2d6d39d3021006.mount: Succeeded.
Jan 23 17:10:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d81d36fc\x2d3901\x2d4cd5\x2db16d\x2dac4e1f1a18e3.mount: Succeeded.
Jan 23 17:10:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-25685d84\x2d3065\x2d40ec\x2d912d\x2d6d39d3021006.mount: Succeeded.
Jan 23 17:10:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d81d36fc\x2d3901\x2d4cd5\x2db16d\x2dac4e1f1a18e3.mount: Succeeded.
Jan 23 17:10:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-25685d84\x2d3065\x2d40ec\x2d912d\x2d6d39d3021006.mount: Succeeded.
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.075309015Z" level=info msg="runSandbox: deleting pod ID e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08 from idIndex" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.075338199Z" level=info msg="runSandbox: removing pod sandbox e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.075353685Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.075364783Z" level=info msg="runSandbox: unmounting shmPath for sandbox e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.075309503Z" level=info msg="runSandbox: deleting pod ID c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c from idIndex" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.075417164Z" level=info msg="runSandbox: removing pod sandbox c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.075429886Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.075444726Z" level=info msg="runSandbox: unmounting shmPath for sandbox c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08-userdata-shm.mount: Succeeded.
Jan 23 17:10:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c-userdata-shm.mount: Succeeded.
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.091437196Z" level=info msg="runSandbox: removing pod sandbox from storage: e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.092429349Z" level=info msg="runSandbox: removing pod sandbox from storage: c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.094684513Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.094702701Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=29dfbc9a-949f-47dd-8793-013f4a75f42d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:23.094952 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:23.094995 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:23.095016 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:23.095064 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e38972933b84e24808cf331525dfd397b08575871092aa6a6854f59372851c08): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.097721212Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.097738300Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=b44184bb-bf92-4749-a70b-fbbd55530632 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:23.097915 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:23.097958 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:23.097982 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:23.098027 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(c8073f63319059a724ad6c362a95b089d1fd9e9353bc8d63ff24bb51326f7c6c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 17:10:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:23.995903 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.996276276Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:23.996315971Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:24.011082945Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/7454522f-f8ea-438a-9151-696bbae09745 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:24.011109830Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:24.995403 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:24.995792339Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:24.995831466Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:25.006826485Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/5afefb64-98c5-4ef2-a610-acd7337b0e41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:25.006851285Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.032620013Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.032655131Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-34d305d2\x2dd42c\x2d456b\x2da373\x2d37ae1a992afd.mount: Succeeded.
Jan 23 17:10:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-34d305d2\x2dd42c\x2d456b\x2da373\x2d37ae1a992afd.mount: Succeeded.
Jan 23 17:10:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-34d305d2\x2dd42c\x2d456b\x2da373\x2d37ae1a992afd.mount: Succeeded.
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.067281355Z" level=info msg="runSandbox: deleting pod ID 124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455 from idIndex" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.067304935Z" level=info msg="runSandbox: removing pod sandbox 124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.067320824Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.067332288Z" level=info msg="runSandbox: unmounting shmPath for sandbox 124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455-userdata-shm.mount: Succeeded.
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.083460356Z" level=info msg="runSandbox: removing pod sandbox from storage: 124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.086142008Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.086159100Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=f3db1b6b-528a-42bf-b5e7-383aaffe2b25 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:26.086391 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:10:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:26.086435 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:10:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:26.086458 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:10:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:26.086505 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(124d724b577217e0fa45c5c029305a8a54c2b70b91b96e1a3121d32f32ae7455): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:10:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:26.995866 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.996250504Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:26.996290413Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:27.007770674Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/b945c37f-eb69-4a5f-afb7-3b881c24ea6c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:27.007790615Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:27.886875 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:10:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:27.886896 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:10:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:27.886906 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:10:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:27.886915 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:10:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:27.886925 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:10:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:27.886933 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:10:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:27.886940 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:10:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:27.894331721Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=5f9547b1-1c39-4fe6-ae12-a3a9b3526c9d name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:10:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:27.894448677Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5f9547b1-1c39-4fe6-ae12-a3a9b3526c9d name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:10:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:27.996483 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:10:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:27.996914766Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:27.996967391Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:28.009035057Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/ce04a714-6ff7-44da-9d25-4255a0d16025 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:28.009058023Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:28.143286058Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:10:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:28.995650 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:10:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:28.995996363Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:28.996034322Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:29.008061068Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/5c2c5b71-84db-4046-966e-df04336e2b7c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:29.008081573Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:31.996332 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:10:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:31.996822171Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:31.996878452Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:32.010511489Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f4f9b09a-2428-4143-8b81-7d4c501a4f6f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:32.010537049Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:33.996034 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:10:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:33.996364205Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:33.996410009Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:33.996807 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:10:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:33.997307 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:10:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:34.007523002Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/0e847250-c815-462f-9e9a-1cfa5e53fff8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:34.007548000Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:34.995438 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:10:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:34.995664 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:10:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:34.995766 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:10:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:34.995785675Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:34.995828469Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:34.995901532Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:34.995932717Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:34.995972326Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:10:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:34.995997809Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:10:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:35.014361157Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/c0b74a1c-c310-475c-961a-103fd0a5f442 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:35.014383704Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:35.015005440Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/840ca858-af99-4618-9937-3b074a262742 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:35.015022158Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:10:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:35.015151234Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bcd30371-3ebd-455b-a23d-17a4a5f63c64 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:10:35 hub-master-0.workload.bos2.lab crio[8584]:
time="2023-01-23 17:10:35.015170347Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:10:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493838.1289] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:10:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493838.1295] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:10:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493838.1296] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:10:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493838.1298] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:10:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493838.1303] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:10:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493838.1308] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:10:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493839.4362] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:10:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:40.995674 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:10:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:40.996066842Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:10:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:40.996108281Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:10:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:41.007086821Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/f550ee14-8e88-4e62-8827-3f054aff1e4e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:10:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:41.007113780Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.384216213Z" level=info msg="NetworkStart: stopping network for sandbox d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.384373313Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1 
UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/8a7c962c-6d09-40be-9f1e-e0126502f50b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.384400248Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.384407260Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.384414550Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.385048726Z" level=info msg="NetworkStart: stopping network for sandbox 30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.385203221Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/799592be-cd55-49d6-8a00-0bd07d1a444c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.385238297Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.385246356Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.385253056Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.390800767Z" level=info msg="NetworkStart: stopping network for sandbox bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.390924952Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/c7f4937e-20fe-4b83-94ee-35ba3006b334 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.390945816Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.390951858Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.390957323Z" level=info msg="Deleting pod 
openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.395766030Z" level=info msg="NetworkStart: stopping network for sandbox 15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.395905182Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/ea61a664-9879-4add-8b61-497dc755e3fa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.395927032Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.395934131Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.395941399Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.396291375Z" level=info msg="NetworkStart: stopping network for sandbox ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.396423262Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/f12b0b52-6172-410d-b0ff-fb856f3e1dde Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.396448553Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.396456070Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:10:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:42.396462220Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:10:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:10:47.997145 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:10:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:10:47.997839 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:10:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:57.021709565Z" level=info msg="NetworkStart: stopping network for sandbox 269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:10:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:57.022020058Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/4a937165-55c5-44f4-a8be-8b8dc320abfc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:10:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:57.022048008Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:10:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:57.022055988Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:10:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:57.022066545Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:10:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:10:58.142872320Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:11:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:00.996222 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:11:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:00.996743 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:11:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:09.024693220Z" level=info msg="NetworkStart: stopping network for sandbox 700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:09.024944274Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/7454522f-f8ea-438a-9151-696bbae09745 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:09.024969119Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:09.024976942Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:09 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:09.024983082Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:10.020984615Z" level=info msg="NetworkStart: stopping network for sandbox 182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:10.021134438Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/5afefb64-98c5-4ef2-a610-acd7337b0e41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:10.021158974Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:10.021166474Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:10.021174458Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:12.021243998Z" level=info msg="NetworkStart: stopping network for sandbox 94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:12.021396687Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/b945c37f-eb69-4a5f-afb7-3b881c24ea6c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:12.021419870Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:12.021426806Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:12.021433144Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:13.023681222Z" level=info msg="NetworkStart: stopping network for sandbox 28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:13.023825401Z" level=info msg="Got pod network 
&{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/ce04a714-6ff7-44da-9d25-4255a0d16025 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:13.023848219Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:13.023854862Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:13.023861222Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:13.996573 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:11:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:13.997076 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:11:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:14.022751394Z" level=info msg="NetworkStart: stopping network for sandbox 5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:14.022899070Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/5c2c5b71-84db-4046-966e-df04336e2b7c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:14.022922319Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:14.022928986Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:14.022936151Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:17.023961754Z" level=info msg="NetworkStart: stopping network for sandbox e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:17.024115259Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 
Namespace:openshift-ingress-canary ID:e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f4f9b09a-2428-4143-8b81-7d4c501a4f6f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:17.024140686Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:17.024149239Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:17.024155707Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:19.020529717Z" level=info msg="NetworkStart: stopping network for sandbox ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:19.020692379Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/0e847250-c815-462f-9e9a-1cfa5e53fff8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:19.020718558Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:19.020725512Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:19.020731960Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.028701401Z" level=info msg="NetworkStart: stopping network for sandbox 2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.028848124Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/840ca858-af99-4618-9937-3b074a262742 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.028870305Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.028876632Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:20 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.028882493Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.028909025Z" level=info msg="NetworkStart: stopping network for sandbox 800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029033466Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/c0b74a1c-c310-475c-961a-103fd0a5f442 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029056157Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029063582Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029069612Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029147083Z" level=info msg="NetworkStart: stopping network for sandbox a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029273079Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/bcd30371-3ebd-455b-a23d-17a4a5f63c64 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029299410Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029307336Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:20.029314528Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:26.020289827Z" level=info msg="NetworkStart: stopping network for sandbox 3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:26.020442787Z" level=info msg="Got pod network 
&{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/f550ee14-8e88-4e62-8827-3f054aff1e4e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:26.020467960Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:11:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:26.020475134Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:11:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:26.020483514Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:26.997096 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.003025 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.396408643Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.396447369Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.397027769Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed 
out waiting for the condition" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.397069726Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.400849469Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.400886493Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-799592be\x2dcd55\x2d49d6\x2d8a00\x2d0bd07d1a444c.mount: Succeeded. Jan 23 17:11:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8a7c962c\x2d6d09\x2d40be\x2d9f1e\x2de0126502f50b.mount: Succeeded. Jan 23 17:11:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c7f4937e\x2d20fe\x2d4b83\x2d94ee\x2d35ba3006b334.mount: Succeeded.
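Analysis note: every RunPodSandbox request above carries a per-request id=<uuid>, and CRI-O repeats that id on the matching "NetworkStart: stopping network", "Error stopping network on cleanup", and "runSandbox: ..." records, so each stuck sandbox can be followed end to end through the interleaved output. Below is a minimal correlation sketch in Python; it assumes this excerpt has been saved verbatim to a file named node.log (the path, script, and output format are illustrative, not an official tool).

import re
import sys
from collections import defaultdict

# Hypothetical capture of this journal excerpt; pass a real path as argv[1].
LOG_PATH = "node.log"

# CRI-O tags each runtime request with id=<uuid> and repeats it on every
# follow-up record. Message bodies may contain backslash-escaped quotes
# (e.g. \"multus-cni-network\"), so the pattern allows \" inside msg.
REC = re.compile(
    r'msg="(?P<msg>(?:[^"\\]|\\.)*)"'
    r' id=(?P<id>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})'
)

def correlate(path):
    """Group CRI-O msg strings by request id, preserving log order."""
    by_id = defaultdict(list)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in REC.finditer(line):
                by_id[m.group("id")].append(m.group("msg"))
    return by_id

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else LOG_PATH
    for req_id, msgs in sorted(correlate(path).items()):
        failed = any("Error stopping network" in m for m in msgs)
        print(f"{req_id} {'FAILED' if failed else 'pending'} "
              f"({len(msgs)} records) {msgs[0][:60]}")

Under those assumptions, the five request ids seen on the "Error stopping network" records here (485f6aea, b2f70ebc, 886e8e53, 9b0d8379, c13f4029) should come out FAILED, matching the cleanup records in the surrounding output.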
Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.406733912Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.406790029Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.407650731Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.407687358Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f12b0b52\x2d6172\x2d410d\x2db0ff\x2dfb856f3e1dde.mount: Succeeded. Jan 23 17:11:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ea61a664\x2d9879\x2d4add\x2d8b61\x2d497dc755e3fa.mount: Succeeded. Jan 23 17:11:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-799592be\x2dcd55\x2d49d6\x2d8a00\x2d0bd07d1a444c.mount: Succeeded.
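Analysis note: these cleanup failures share one root cause that the log itself exposes: kubelet is holding ovnkube-node-897lw in CrashLoopBackOff ("back-off 5m0s restarting failed container=ovnkube-node"), and Multus times out polling for its readiness indicator file at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, a file OVN-Kubernetes is expected to write once its node pod is healthy. While that pod crash-loops, every CNI add and delete ends in "timed out waiting for the condition". A companion sketch (same hypothetical node.log, same caveats as above) that tallies both symptom families so the dependency is visible at a glance:

import re
import sys
from collections import Counter

# Hypothetical capture of this journal excerpt; pass a real path as argv[1].
LOG_PATH = "node.log"

# kubelet backoff records name the failing container and pod inline.
# Kubernetes pod names cannot contain "_", so pod/namespace split cleanly.
BACKOFF = re.compile(
    r'back-off (?P<delay>\S+) restarting failed'
    r' container=(?P<container>\S+)'
    r' pod=(?P<pod>[^_\s]+)_(?P<namespace>[^(\s]+)\('
)
# Multus failures cite the readiness indicator file they are waiting on.
INDICATOR = re.compile(r'readinessindicatorfile @ (?P<path>\S+)\.')

def summarize(path):
    backoffs, waiting = Counter(), Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in BACKOFF.finditer(line):
                backoffs[(m["namespace"], m["pod"], m["container"])] += 1
            for m in INDICATOR.finditer(line):
                waiting[m["path"]] += 1
    for key, n in backoffs.most_common():
        print(f"{n:4d}x CrashLoopBackOff  {'/'.join(key)}")
    for p, n in waiting.most_common():
        print(f"{n:4d}x waiting on readiness indicator  {p}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else LOG_PATH)

On a live node the quicker check is simply whether that file exists and why ovnkube-node keeps restarting; the tally only makes the causal chain explicit from a captured log.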
Jan 23 17:11:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8a7c962c\x2d6d09\x2d40be\x2d9f1e\x2de0126502f50b.mount: Succeeded. Jan 23 17:11:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c7f4937e\x2d20fe\x2d4b83\x2d94ee\x2d35ba3006b334.mount: Succeeded. Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.440337475Z" level=info msg="runSandbox: deleting pod ID d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1 from idIndex" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.440366724Z" level=info msg="runSandbox: removing pod sandbox d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.440384195Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.440340322Z" level=info msg="runSandbox: deleting pod ID 30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9 from idIndex" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.440415855Z" level=info msg="runSandbox: removing pod sandbox 30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.440426452Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.440431236Z" level=info msg="runSandbox: unmounting shmPath for sandbox d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.440437037Z" level=info msg="runSandbox: unmounting shmPath for sandbox 30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.448294307Z" level=info msg="runSandbox: deleting pod ID bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5 from idIndex" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]:
time="2023-01-23 17:11:27.448317777Z" level=info msg="runSandbox: removing pod sandbox bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.448330029Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.448343648Z" level=info msg="runSandbox: unmounting shmPath for sandbox bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.452310135Z" level=info msg="runSandbox: deleting pod ID ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae from idIndex" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.452337589Z" level=info msg="runSandbox: removing pod sandbox ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.452350178Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.452360495Z" level=info msg="runSandbox: unmounting shmPath for sandbox ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.452454573Z" level=info msg="runSandbox: removing pod sandbox from storage: 30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.455431157Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.455450082Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=b2f70ebc-cc44-495c-ac97-6e99ed2e1ff3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.455683 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network 
\"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.455728 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.455753 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.455809 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.456336271Z" level=info msg="runSandbox: deleting pod ID 15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd from idIndex" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.456365535Z" level=info msg="runSandbox: removing pod sandbox 15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.456379018Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.456389830Z" level=info msg="runSandbox: unmounting shmPath for sandbox 15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.456521658Z" level=info msg="runSandbox: removing pod sandbox from storage: d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.459789649Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.459808243Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=485f6aea-042d-4efe-b180-3038896c8aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.459997 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.460027 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.460048 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.460086 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.464446940Z" level=info msg="runSandbox: removing pod sandbox from storage: bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.467724768Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.467742757Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=886e8e53-7ccc-45e0-b259-4c6eae595ecc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.467929 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.467961 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.467983 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.468027 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.468443720Z" level=info msg="runSandbox: removing pod sandbox from storage: ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.471848278Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.471868738Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=c13f4029-587f-4858-90f8-fb4c3fd87d60 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.472038 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.472071 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.472090 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.472126 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.477436859Z" level=info msg="runSandbox: removing pod sandbox from storage: 15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.480667381Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.480686068Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=9b0d8379-4c25-4365-a582-7252298c0202 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.480897 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.480940 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.480965 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:27.481013 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.544008 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.544150 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544227039Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544258817Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544360011Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544387828Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.544267 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.544369 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.544408 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544476059Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544509125Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544728351Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544753519Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544771874Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.544757456Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.574476248Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c UID:1886664c-cb49-48f7-b263-eff19ad90869 
NetNS:/var/run/netns/a12e2e6f-5c6d-4675-a799-490852c2c933 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.574499879Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.575457931Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/293eae6a-fbc7-4c6d-9efc-e569ff77508d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.575477433Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.576886183Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ac5b95de-3ba7-444c-a749-3fcdc9fb1495 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.576909658Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.577874354Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/6b6e86d0-1f06-4961-a04b-b7c670f9a86d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.577894776Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.578492754Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ce022c57-d12e-4bec-8d6f-886165185d40 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:27.578515209Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.888062 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 
17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.888078 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.888086 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.888094 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.888100 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.888107 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:11:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:27.888112 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:11:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:28.142625346Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f12b0b52\x2d6172\x2d410d\x2db0ff\x2dfb856f3e1dde.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f12b0b52\x2d6172\x2d410d\x2db0ff\x2dfb856f3e1dde.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f12b0b52\x2d6172\x2d410d\x2db0ff\x2dfb856f3e1dde.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f12b0b52\x2d6172\x2d410d\x2db0ff\x2dfb856f3e1dde.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ea61a664\x2d9879\x2d4add\x2d8b61\x2d497dc755e3fa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ea61a664\x2d9879\x2d4add\x2d8b61\x2d497dc755e3fa.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ea61a664\x2d9879\x2d4add\x2d8b61\x2d497dc755e3fa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ea61a664\x2d9879\x2d4add\x2d8b61\x2d497dc755e3fa.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c7f4937e\x2d20fe\x2d4b83\x2d94ee\x2d35ba3006b334.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c7f4937e\x2d20fe\x2d4b83\x2d94ee\x2d35ba3006b334.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ddd96f0e5ed5f284da6e8293f39d89e85f0b30f91410b8d0db0a1c241f68a0ae-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-15b468683b3f763cad7777d1138bc4a928b4d48b283f7c7c5c753f9f56de59cd-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-799592be\x2dcd55\x2d49d6\x2d8a00\x2d0bd07d1a444c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-799592be\x2dcd55\x2d49d6\x2d8a00\x2d0bd07d1a444c.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8a7c962c\x2d6d09\x2d40be\x2d9f1e\x2de0126502f50b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8a7c962c\x2d6d09\x2d40be\x2d9f1e\x2de0126502f50b.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-30b4361fcf1d2a2b021de2fcaedfa371a521fbd7ea2c456aea0c1e07c3c024f9-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bc205ee495c15643f1e433fce991bf75b7eddbca5364a3e20d641ee297ee38c5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:11:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d40d3ec112f3deddd68d22c59b7b10e11b7919bfcd43010c794a2cb1bfbe66d1-userdata-shm.mount has successfully entered the 'dead' state. 
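Every CreatePodSandbox failure above (openshift-apiserver, openshift-controller-manager, openshift-oauth-apiserver, openshift-authentication, openshift-route-controller-manager) dies at the same check: Multus is configured with a readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which the default OVN-Kubernetes network writes once it is up. Until that file exists, Multus rejects every CNI ADD, CRI-O tears the sandbox back down, and kubelet immediately retries ("No sandbox for pod can be found. Need to start a new one"). Below is a minimal sketch of that gate, assuming the PollImmediate helper the error text names; it is not the actual Multus source, and the 1s interval and 2m timeout are illustrative values, not Multus's real settings.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator blocks until path exists or the poll times out.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			return false, nil // file not written yet; keep polling
		}
		return true, nil
	})
}

func main() {
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForReadinessIndicator(indicator, 2*time.Minute); err != nil {
		// on timeout err reads "timed out waiting for the condition",
		// the exact tail of each pollimmediate line in the log above
		fmt.Printf("still waiting for readinessindicatorfile @ %s: %v\n", indicator, err)
		os.Exit(1)
	}
	fmt.Println("default network is ready")
}
```

On timeout, wait.PollImmediate returns exactly the "timed out waiting for the condition" error that closes each failure entry above, which is why every pod in the batch fails with an identical message.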
Jan 23 17:11:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:40.996080 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:11:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:40.996610 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.033555683Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.033759702Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4a937165\x2d55c5\x2d44f4\x2da8be\x2d8b8dc320abfc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4a937165\x2d55c5\x2d44f4\x2da8be\x2d8b8dc320abfc.mount has successfully entered the 'dead' state. Jan 23 17:11:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4a937165\x2d55c5\x2d44f4\x2da8be\x2d8b8dc320abfc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4a937165\x2d55c5\x2d44f4\x2da8be\x2d8b8dc320abfc.mount has successfully entered the 'dead' state. Jan 23 17:11:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4a937165\x2d55c5\x2d44f4\x2da8be\x2d8b8dc320abfc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4a937165\x2d55c5\x2d44f4\x2da8be\x2d8b8dc320abfc.mount has successfully entered the 'dead' state. 
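The 17:11:40 kubenswrapper entries name the root cause: ovnkube-node on this node is in CrashLoopBackOff, so OVN-Kubernetes never starts and never writes the readiness indicator file. Note that the CNI DEL path blocks on the same poll ("PollImmediate error waiting for ReadinessIndicatorFile (on del)"), so each retry pays the timeout on teardown as well as on add. The "back-off 5m0s" reflects kubelet's container restart back-off, which roughly doubles per failed restart up to a cap; the sketch below is a hedged illustration in which the 10s base and 5m cap are kubelet's commonly documented defaults, not values taken from this log.

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelays illustrates the doubling-with-cap restart schedule behind
// kubelet's "back-off 5m0s restarting failed container" message.
func crashLoopDelays(base, maxDelay time.Duration, restarts int) []time.Duration {
	delays := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// Prints: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
	fmt.Println(crashLoopDelays(10*time.Second, 5*time.Minute, 8))
}
```

Once the schedule reaches the cap, the container is only retried every five minutes, which matches the long stretches between ovnkube-node "RemoveContainer" attempts in this log.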
Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.068309617Z" level=info msg="runSandbox: deleting pod ID 269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580 from idIndex" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.068338232Z" level=info msg="runSandbox: removing pod sandbox 269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.068354575Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.068376597Z" level=info msg="runSandbox: unmounting shmPath for sandbox 269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.079466397Z" level=info msg="runSandbox: removing pod sandbox from storage: 269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.082894146Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:42.082916725Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=dba1f6fd-05ef-49c6-a0c3-0af0f67855f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:42.083133 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:11:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:42.083184 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:11:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:42.083215 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:11:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:42.083266 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(269d4f78191a9dbecbac2547d452d02d3b4ce910ed0e0c56604da26fd41ee580): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:11:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:52.996401 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:11:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:52.996908398Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:52.996948721Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:11:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:53.014558081Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/2a100851-04dd-4d38-8322-4f4e34fc2868 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:11:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:53.014585112Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.036229442Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.036475674Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7454522f\x2df8ea\x2d438a\x2d9151\x2d696bbae09745.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7454522f\x2df8ea\x2d438a\x2d9151\x2d696bbae09745.mount has successfully entered the 'dead' state. Jan 23 17:11:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7454522f\x2df8ea\x2d438a\x2d9151\x2d696bbae09745.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7454522f\x2df8ea\x2d438a\x2d9151\x2d696bbae09745.mount has successfully entered the 'dead' state. Jan 23 17:11:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7454522f\x2df8ea\x2d438a\x2d9151\x2d696bbae09745.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7454522f\x2df8ea\x2d438a\x2d9151\x2d696bbae09745.mount has successfully entered the 'dead' state. Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.077315789Z" level=info msg="runSandbox: deleting pod ID 700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af from idIndex" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.077342032Z" level=info msg="runSandbox: removing pod sandbox 700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.077356649Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.077370288Z" level=info msg="runSandbox: unmounting shmPath for sandbox 700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af-userdata-shm.mount has successfully entered the 'dead' state. 
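The run-utsns-*, run-ipcns-*, run-netns-*, and *-userdata-shm mount units that keep entering the 'dead' state throughout this section are systemd's view of each failed sandbox's namespaces and shm mount being unmounted. The \x2d runs in their names are systemd escaping, not corruption: a mount unit's name encodes its mount path, with "/" written as "-" and a literal "-" escaped as \x2d (see systemd.unit(5)). The decoder below is simplified to just the escapes that occur in this log.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath turns a systemd mount-unit name back into its mount path:
// "-" separates path components and "\xNN" escapes a literal byte. This is a
// simplified reading of systemd.unit(5), enough for the unit names here.
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v)) // e.g. \x2d -> '-'
				i += 3
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			b.WriteByte('/') // unescaped "-" is a path separator
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	// Prints: /run/netns/7454522f-f8ea-438a-9151-696bbae09745
	fmt.Println(unescapeUnitPath(`run-netns-7454522f\x2df8ea\x2d438a\x2d9151\x2d696bbae09745.mount`))
}
```

So the unit run-netns-7454522f\x2df8ea\x2d438a\x2d9151\x2d696bbae09745.mount above simply guards the mount at /run/netns/7454522f-f8ea-438a-9151-696bbae09745, one network namespace per destroyed sandbox.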
Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.093410353Z" level=info msg="runSandbox: removing pod sandbox from storage: 700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.096415451Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:54.096440632Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d31acf64-1f86-45a1-8b9c-88f2e8563df0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:54.096667 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:11:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:54.096716 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:11:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:54.096741 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:11:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:54.096789 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(700178839e2e94c127dde97879669d24cf3ea1ab156d2bad4cc260c41f2691af): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:11:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:11:54.996634 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:11:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:54.997143 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.032218194Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.032260219Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5afefb64\x2d98c5\x2d4ef2\x2da610\x2dacd7337b0e41.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5afefb64\x2d98c5\x2d4ef2\x2da610\x2dacd7337b0e41.mount has successfully entered the 'dead' state. 
Jan 23 17:11:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5afefb64\x2d98c5\x2d4ef2\x2da610\x2dacd7337b0e41.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5afefb64\x2d98c5\x2d4ef2\x2da610\x2dacd7337b0e41.mount has successfully entered the 'dead' state. Jan 23 17:11:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5afefb64\x2d98c5\x2d4ef2\x2da610\x2dacd7337b0e41.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5afefb64\x2d98c5\x2d4ef2\x2da610\x2dacd7337b0e41.mount has successfully entered the 'dead' state. Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.090311757Z" level=info msg="runSandbox: deleting pod ID 182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc from idIndex" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.090337702Z" level=info msg="runSandbox: removing pod sandbox 182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.090353267Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.090365557Z" level=info msg="runSandbox: unmounting shmPath for sandbox 182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc-userdata-shm.mount has successfully entered the 'dead' state. 
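Each failed ADD ends in the same runSandbox cleanup ladder, here running for sandbox 182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc and completed by the storage and name releases just below: delete the pod ID from the idIndex, remove the sandbox, drop its container ID, unmount its shmPath, remove it from storage, then release the container and pod sandbox names. The schematic below condenses that ordering; every function name in it is a hypothetical stand-in for the step the log shows, not CRI-O's actual API.

```go
package main

import "fmt"

// step echoes one cleanup action, mirroring one runSandbox log line per call.
// Both step and cleanupFailedSandbox are illustrative names, not CRI-O code.
func step(action, id string) { fmt.Printf("runSandbox: %s %s\n", action, id) }

func cleanupFailedSandbox(id string) {
	step("deleting pod ID from idIndex:", id)
	step("removing pod sandbox:", id)
	step("deleting container ID from idIndex for sandbox:", id)
	step("unmounting shmPath for sandbox:", id)
	step("removing pod sandbox from storage:", id)
	step("releasing container name and pod sandbox name for:", id)
}

func main() {
	cleanupFailedSandbox("182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc")
}
```

Releasing the k8s_POD_* names last is what lets kubelet's next attempt reuse them under a fresh sandbox ID, which is the create-fail-destroy churn visible throughout this section.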
Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.110440759Z" level=info msg="runSandbox: removing pod sandbox from storage: 182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.113836213Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:55.113855618Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=5a911650-b4e8-4913-afc6-550223a6516d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:55.114094 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:11:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:55.114132 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:11:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:55.114156 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:11:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:55.114200 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(182851548da906096c80d6dadb244d8967c1b8f2599c2bafb21f160181a431dc): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.032661274Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.032699141Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b945c37f\x2deb69\x2d4a5f\x2dafb7\x2d3b881c24ea6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b945c37f\x2deb69\x2d4a5f\x2dafb7\x2d3b881c24ea6c.mount has successfully entered the 'dead' state. Jan 23 17:11:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b945c37f\x2deb69\x2d4a5f\x2dafb7\x2d3b881c24ea6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b945c37f\x2deb69\x2d4a5f\x2dafb7\x2d3b881c24ea6c.mount has successfully entered the 'dead' state. Jan 23 17:11:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b945c37f\x2deb69\x2d4a5f\x2dafb7\x2d3b881c24ea6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b945c37f\x2deb69\x2d4a5f\x2dafb7\x2d3b881c24ea6c.mount has successfully entered the 'dead' state. 
Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.076279309Z" level=info msg="runSandbox: deleting pod ID 94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81 from idIndex" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.076303123Z" level=info msg="runSandbox: removing pod sandbox 94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.076316384Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.076329969Z" level=info msg="runSandbox: unmounting shmPath for sandbox 94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.092467212Z" level=info msg="runSandbox: removing pod sandbox from storage: 94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.095709039Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:57.095726750Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=623e07ac-9372-40b1-a658-159e6fa4ebbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:57.095961 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:11:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:57.096010 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:11:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:57.096033 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:11:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:57.096078 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(94495c3fca72233b687b3f786443994ed8cbf62ddbd051b126ba0fed3eb8df81): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.035239518Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.035275617Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ce04a714\x2d6ff7\x2d44da\x2d9d25\x2d4255a0d16025.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ce04a714\x2d6ff7\x2d44da\x2d9d25\x2d4255a0d16025.mount has successfully entered the 'dead' state. Jan 23 17:11:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ce04a714\x2d6ff7\x2d44da\x2d9d25\x2d4255a0d16025.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ce04a714\x2d6ff7\x2d44da\x2d9d25\x2d4255a0d16025.mount has successfully entered the 'dead' state. Jan 23 17:11:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ce04a714\x2d6ff7\x2d44da\x2d9d25\x2d4255a0d16025.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ce04a714\x2d6ff7\x2d44da\x2d9d25\x2d4255a0d16025.mount has successfully entered the 'dead' state. 
Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.079279920Z" level=info msg="runSandbox: deleting pod ID 28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a from idIndex" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.079303683Z" level=info msg="runSandbox: removing pod sandbox 28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.079316831Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.079330038Z" level=info msg="runSandbox: unmounting shmPath for sandbox 28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.092440069Z" level=info msg="runSandbox: removing pod sandbox from storage: 28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.095746809Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.095765205Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=e9f330c2-8df2-41fa-9176-7441f443b8c0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:58.095886 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:11:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:58.095931 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:11:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:58.095954 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:11:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:58.096002 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(28201cd98f9a7d19f575e9ab36e3055a4b3f088d6bbed76b50a24d284efdcc0a): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:11:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:58.142637261Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.034413473Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.034444893Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5c2c5b71\x2d84db\x2d4046\x2d966e\x2ddf04336e2b7c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5c2c5b71\x2d84db\x2d4046\x2d966e\x2ddf04336e2b7c.mount has successfully entered the 'dead' state. Jan 23 17:11:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5c2c5b71\x2d84db\x2d4046\x2d966e\x2ddf04336e2b7c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5c2c5b71\x2d84db\x2d4046\x2d966e\x2ddf04336e2b7c.mount has successfully entered the 'dead' state. Jan 23 17:11:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5c2c5b71\x2d84db\x2d4046\x2d966e\x2ddf04336e2b7c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5c2c5b71\x2d84db\x2d4046\x2d966e\x2ddf04336e2b7c.mount has successfully entered the 'dead' state. 
Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.078298303Z" level=info msg="runSandbox: deleting pod ID 5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b from idIndex" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.078322583Z" level=info msg="runSandbox: removing pod sandbox 5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.078335907Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.078348677Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.097414719Z" level=info msg="runSandbox: removing pod sandbox from storage: 5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.100852723Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:11:59.100872316Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=029ce25a-72a6-435b-870a-8f1db44a2bfd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:11:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:59.101088 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:11:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:59.101130 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:11:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:59.101153 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:11:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:11:59.101198 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5354e3cc8e0db892713fa1c17b021e8d82eb620695aa2fa3859e28d6c264dd8b): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.036164446Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.036200592Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f4f9b09a\x2d2428\x2d4143\x2d8b81\x2d7d4c501a4f6f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f4f9b09a\x2d2428\x2d4143\x2d8b81\x2d7d4c501a4f6f.mount has successfully entered the 'dead' state. Jan 23 17:12:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f4f9b09a\x2d2428\x2d4143\x2d8b81\x2d7d4c501a4f6f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f4f9b09a\x2d2428\x2d4143\x2d8b81\x2d7d4c501a4f6f.mount has successfully entered the 'dead' state. Jan 23 17:12:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f4f9b09a\x2d2428\x2d4143\x2d8b81\x2d7d4c501a4f6f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f4f9b09a\x2d2428\x2d4143\x2d8b81\x2d7d4c501a4f6f.mount has successfully entered the 'dead' state. 
Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.092302977Z" level=info msg="runSandbox: deleting pod ID e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb from idIndex" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.092332338Z" level=info msg="runSandbox: removing pod sandbox e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.092345391Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.092356888Z" level=info msg="runSandbox: unmounting shmPath for sandbox e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.108442830Z" level=info msg="runSandbox: removing pod sandbox from storage: e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.111907898Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:02.111925296Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=af0977ea-ce7f-4019-82d0-1f22319a388f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:02.112140 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:12:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:02.112306 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:12:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:02.112327 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:12:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:02.112374 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(e5dcea23ccb25769f087beb444c872bfd57f931f948406e8a454c8bd4cb82ffb): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.031487914Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.031541287Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0e847250\x2dc815\x2d462f\x2d9e9a\x2d1cfa5e53fff8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0e847250\x2dc815\x2d462f\x2d9e9a\x2d1cfa5e53fff8.mount has successfully entered the 'dead' state. Jan 23 17:12:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0e847250\x2dc815\x2d462f\x2d9e9a\x2d1cfa5e53fff8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0e847250\x2dc815\x2d462f\x2d9e9a\x2d1cfa5e53fff8.mount has successfully entered the 'dead' state. Jan 23 17:12:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0e847250\x2dc815\x2d462f\x2d9e9a\x2d1cfa5e53fff8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0e847250\x2dc815\x2d462f\x2d9e9a\x2d1cfa5e53fff8.mount has successfully entered the 'dead' state. 
Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.079307374Z" level=info msg="runSandbox: deleting pod ID ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224 from idIndex" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.079335479Z" level=info msg="runSandbox: removing pod sandbox ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.079351198Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.079367603Z" level=info msg="runSandbox: unmounting shmPath for sandbox ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.093443396Z" level=info msg="runSandbox: removing pod sandbox from storage: ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.096906060Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:04.096925295Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=5dabb546-8d69-4353-a63b-42dde46d68d7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:04.097156 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:12:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:04.097204 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:12:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:04.097235 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:12:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:04.097283 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(ab8d8d212ce25eb51d9a7ead0e9b9c16e9da03c26d808d0affe72fece4715224): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.039567140Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.039608168Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.040223291Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.040263892Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.040252513Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.040356281Z" level=info msg="runSandbox: cleaning up 
namespaces after failing to run sandbox 800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-840ca858\x2daf99\x2d4618\x2d9937\x2d3b074a262742.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-840ca858\x2daf99\x2d4618\x2d9937\x2d3b074a262742.mount has successfully entered the 'dead' state. Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bcd30371\x2d3ebd\x2d455b\x2da23d\x2d17a4a5f63c64.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bcd30371\x2d3ebd\x2d455b\x2da23d\x2d17a4a5f63c64.mount has successfully entered the 'dead' state. Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c0b74a1c\x2dc310\x2d475c\x2d961a\x2d103fd0a5f442.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c0b74a1c\x2dc310\x2d475c\x2d961a\x2d103fd0a5f442.mount has successfully entered the 'dead' state. Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bcd30371\x2d3ebd\x2d455b\x2da23d\x2d17a4a5f63c64.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-bcd30371\x2d3ebd\x2d455b\x2da23d\x2d17a4a5f63c64.mount has successfully entered the 'dead' state. Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c0b74a1c\x2dc310\x2d475c\x2d961a\x2d103fd0a5f442.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c0b74a1c\x2dc310\x2d475c\x2d961a\x2d103fd0a5f442.mount has successfully entered the 'dead' state. Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-840ca858\x2daf99\x2d4618\x2d9937\x2d3b074a262742.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-840ca858\x2daf99\x2d4618\x2d9937\x2d3b074a262742.mount has successfully entered the 'dead' state. Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bcd30371\x2d3ebd\x2d455b\x2da23d\x2d17a4a5f63c64.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-bcd30371\x2d3ebd\x2d455b\x2da23d\x2d17a4a5f63c64.mount has successfully entered the 'dead' state. Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-840ca858\x2daf99\x2d4618\x2d9937\x2d3b074a262742.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-840ca858\x2daf99\x2d4618\x2d9937\x2d3b074a262742.mount has successfully entered the 'dead' state. Jan 23 17:12:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c0b74a1c\x2dc310\x2d475c\x2d961a\x2d103fd0a5f442.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c0b74a1c\x2dc310\x2d475c\x2d961a\x2d103fd0a5f442.mount has successfully entered the 'dead' state. 
Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.085313102Z" level=info msg="runSandbox: deleting pod ID 2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6 from idIndex" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.085339010Z" level=info msg="runSandbox: removing pod sandbox 2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.085352251Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.085366169Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.085316498Z" level=info msg="runSandbox: deleting pod ID a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690 from idIndex" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.085427340Z" level=info msg="runSandbox: removing pod sandbox a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.085439797Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.085452705Z" level=info msg="runSandbox: unmounting shmPath for sandbox a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.086273762Z" level=info msg="runSandbox: deleting pod ID 800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b from idIndex" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.086304997Z" level=info msg="runSandbox: removing pod sandbox 800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.086318462Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.086331643Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.101447810Z" level=info msg="runSandbox: removing pod sandbox from storage: a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.101449962Z" level=info msg="runSandbox: removing pod sandbox from storage: 2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.102469428Z" level=info msg="runSandbox: removing pod sandbox from storage: 800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.104912244Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.104929817Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=55870823-0270-4269-bcbb-b0e8e8f49def name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.105225 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.105267 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.105289 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.105333 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.107916282Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.107934604Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=9169d0c3-5904-4589-9c9b-f6fe68b6da1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.108148 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.108181 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.108201 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.108240 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.114358392Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.114381651Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=65fef74f-f638-4169-8e6c-83d526e980bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.114613 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.114647 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.114678 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:05.114717 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:12:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:05.995696 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.996003424Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:05.996039201Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:06.006680252Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/19ecc5fa-26b7-4b12-9e21-65329e96e52b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:06.006699686Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a7ab430ae2559bc4ca0c6ae44bd36ffbf9a91f89a462106ef53513d715a56690-userdata-shm.mount: Succeeded. Jan 23 17:12:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2a16472cc3164f389b3a48c6965c983c7dfdce9d69e658da7cfbff6b9deb04a6-userdata-shm.mount: Succeeded. Jan 23 17:12:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-800094f2d8a772be87fbb4e8b1d48c13efa70f80ea247faa1a8a83e77d95c14b-userdata-shm.mount: Succeeded. Jan 23 17:12:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:06.995851 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:12:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:06.996177776Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:06.996218652Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:07.007436081Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/dc988dbf-f729-4677-89b5-76d24301fd21 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:07.007457070Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:07.999808 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:12:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:08.001350 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1268] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1273] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1274] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1502] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1504] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1515] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1518] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1518] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1520] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1523] device (eno12409): state change: config -> ip-config 
(reason 'none', sys-iface-state: 'managed') Jan 23 17:12:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493928.1528] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:12:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:09.995922 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:12:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:09.996255310Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:09.996507671Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:10.007695251Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/3ec8d0e6-845b-4476-966a-980987c0ab31 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:10.007721322Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674493930.1454] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:12:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:10.996398 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:12:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:10.996597 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:12:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:10.996714407Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:10.996751548Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:10.996841387Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:10.996876622Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.011912545Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/f7f4241d-ff37-4dad-b0ce-e0e281cfe7d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.011934323Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.013511189Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/4efd09dc-a0f4-4723-bce4-79ac0003ae6c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.013532975Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.032051597Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.032082556Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f550ee14\x2d8e88\x2d4e62\x2d8827\x2d3f054aff1e4e.mount: Succeeded. Jan 23 17:12:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f550ee14\x2d8e88\x2d4e62\x2d8827\x2d3f054aff1e4e.mount: Succeeded. Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.072344811Z" level=info msg="runSandbox: deleting pod ID 3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d from idIndex" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.072371859Z" level=info msg="runSandbox: removing pod sandbox 3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.072387546Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.072404229Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.092462486Z" level=info msg="runSandbox: removing pod sandbox from storage: 3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.095331164Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:11.095350779Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a9d493ab-eb2b-49ad-8e79-2e01d1ee1aa7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:11.095617 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d): 
error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:12:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:11.095665 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:12:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:11.095688 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:12:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:11.095741 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:12:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f550ee14\x2d8e88\x2d4e62\x2d8827\x2d3f054aff1e4e.mount: Succeeded. Jan 23 17:12:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3fb9e290d1f29631790a47af669951cf69429a77e7b7da6441df780ee7e6d92d-userdata-shm.mount: Succeeded. 
Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589502623Z" level=info msg="NetworkStart: stopping network for sandbox b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589529083Z" level=info msg="NetworkStart: stopping network for sandbox bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589516332Z" level=info msg="NetworkStart: stopping network for sandbox 2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589639150Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/a12e2e6f-5c6d-4675-a799-490852c2c933 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589652277Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/ac5b95de-3ba7-444c-a749-3fcdc9fb1495 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589663360Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589669911Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589673399Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589682166Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589689389Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589727604Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/293eae6a-fbc7-4c6d-9efc-e569ff77508d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589676358Z" level=info msg="Deleting pod 
openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589752541Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589760531Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.589767193Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.590863770Z" level=info msg="NetworkStart: stopping network for sandbox 49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.590981573Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/6b6e86d0-1f06-4961-a04b-b7c670f9a86d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.591005484Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.591012069Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.591041906Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.591184884Z" level=info msg="NetworkStart: stopping network for sandbox 08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.591311580Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ce022c57-d12e-4bec-8d6f-886165185d40 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.591331234Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.591337392Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:12:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:12.591343431Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network 
\"multus-cni-network\" (type=multus)" Jan 23 17:12:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:14.996189 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:12:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:14.996558996Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:14.996597217Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:15.008428998Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/a131c8d2-a9a5-47ba-aaba-0d8c9c9e1c90 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:15.008448340Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:16.995965 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:12:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:16.996164 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:12:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:16.996422606Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:16.996468679Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:16.996495024Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:16.996524612Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:17.014811407Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/9ab55d08-e44e-46f2-9b4b-5680fdd6c9d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:17.014836875Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:17.016483662Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/1766909c-b9d8-4e33-a74e-5e0a5304911e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:17.016503775Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:17.996309 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:12:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:17.996638813Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:17.996676047Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:18.007738130Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/4561a1e3-e2ee-460a-b0e5-b3b64fe7b4fe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:18.007757237Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:18.996393 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:12:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:18.996811242Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:18.996853864Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:18.997256 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:12:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:18.997754 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:12:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:19.008723984Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/6c8ae118-d142-438d-aa4c-06d79dc817ae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:19.008745363Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:24 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 17:12:24.996241 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:12:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:24.996553089Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:24.996610484Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:25.008778022Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/14b5cd0c-d7bc-47e9-abbf-9822938b7954 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:25.008800602Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:27.889154 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:12:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:27.889174 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:12:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:27.889181 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:12:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:27.889187 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:12:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:27.889193 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:12:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:27.889200 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:12:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:27.889212 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:12:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:28.142872889Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:12:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:30.000340 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:12:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:30.001374 8631 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.029303121Z" level=info msg="NetworkStart: stopping network for sandbox f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.029486405Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/2a100851-04dd-4d38-8322-4f4e34fc2868 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.029511083Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.029517796Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.029524420Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.577770 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-rd6x4] Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.577811 8631 topology_manager.go:205] "Topology Admit Handler" Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.584721 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-rd6x4] Jan 23 17:12:38 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-besteffort-pod0530de83_1dba_45d0_a4ff_dd81dd6c3f9b.slice. -- Subject: Unit kubepods-besteffort-pod0530de83_1dba_45d0_a4ff_dd81dd6c3f9b.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-besteffort-pod0530de83_1dba_45d0_a4ff_dd81dd6c3f9b.slice has finished starting up. -- -- The start-up result is done. 
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.710463 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.710495 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.710513 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rprzb\" (UniqueName: \"kubernetes.io/projected/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-kube-api-access-rprzb\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.710529 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-ready\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.811351 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.811382 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.811401 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-rprzb\" (UniqueName: \"kubernetes.io/projected/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-kube-api-access-rprzb\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.811418 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-ready\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.811504 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.811607 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-ready\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.811739 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.825716 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-rprzb\" (UniqueName: \"kubernetes.io/projected/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-kube-api-access-rprzb\") pod \"cni-sysctl-allowlist-ds-rd6x4\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:38.893938 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.894422910Z" level=info msg="Running pod sandbox: openshift-multus/cni-sysctl-allowlist-ds-rd6x4/POD" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.894469400Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.905047203Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-rd6x4 Namespace:openshift-multus ID:69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924 UID:0530de83-1dba-45d0-a4ff-dd81dd6c3f9b NetNS:/var/run/netns/eebda105-35a8-4677-8570-2ef42643e495 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:12:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:38.905067487Z" level=info msg="Adding pod openshift-multus_cni-sysctl-allowlist-ds-rd6x4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:12:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:41.996825 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:12:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:41.997354 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:12:48 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00100|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 adds)
Jan 23 17:12:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:51.019604146Z" level=info msg="NetworkStart: stopping network for sandbox 87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:51.019751480Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/19ecc5fa-26b7-4b12-9e21-65329e96e52b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:12:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:51.019773955Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:12:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:51.019780966Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:12:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:51.019787461Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:12:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:52.021275393Z" level=info msg="NetworkStart: stopping network for sandbox 966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:52.021423265Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/dc988dbf-f729-4677-89b5-76d24301fd21 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:12:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:52.021446893Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:12:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:52.021454343Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:12:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:52.021460859Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.021585309Z" level=info msg="NetworkStart: stopping network for sandbox 2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.021750033Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/3ec8d0e6-845b-4476-966a-980987c0ab31 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.021776951Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.021784302Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.021791921Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:12:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:55.996787 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f"
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.997581883Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=882113fb-9454-4aac-aa9a-b5ebad7954fb name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.997719153Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=882113fb-9454-4aac-aa9a-b5ebad7954fb name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.998328456Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=af392565-564b-4de1-bea3-c59bacc15081 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.998425400Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=af392565-564b-4de1-bea3-c59bacc15081 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.999208003Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=8ac60c49-b6c1-4538-8df6-5d26249095fa name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:12:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:55.999283802Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope.
-- Subject: Unit crio-conmon-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.025249776Z" level=info msg="NetworkStart: stopping network for sandbox ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.025375963Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/f7f4241d-ff37-4dad-b0ce-e0e281cfe7d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.025395907Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.025402604Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.025409107Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.027058680Z" level=info msg="NetworkStart: stopping network for sandbox 994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.027161642Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/4efd09dc-a0f4-4723-bce4-79ac0003ae6c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.027179976Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.027186289Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.027193047Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.
-- Subject: Unit crio-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.125919427Z" level=info msg="Created container c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=8ac60c49-b6c1-4538-8df6-5d26249095fa name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.126297564Z" level=info msg="Starting container: c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" id=b743640d-445b-466e-9977-4a3774009d21 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.145560821Z" level=info msg="Started container" PID=118000 containerID=c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=b743640d-445b-466e-9977-4a3774009d21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.150665849Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.161826746Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.161848579Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.161858985Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.171786312Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.171804493Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.171816058Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.182008682Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.182029074Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.182041657Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.190464227Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:56.190480603Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:56.711099 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/189.log"
Jan 23 17:12:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:56.712109 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79}
Jan 23 17:12:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:56.712390 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.600495943Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.600688464Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.600888905Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.600922425Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.601125853Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.601162258Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.601133219Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.601277678Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.602074208Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.602105184Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ce022c57\x2dd12e\x2d4bec\x2d8d6f\x2d886165185d40.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ce022c57\x2dd12e\x2d4bec\x2d8d6f\x2d886165185d40.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6b6e86d0\x2d1f06\x2d4961\x2da04b\x2db7c670f9a86d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-6b6e86d0\x2d1f06\x2d4961\x2da04b\x2db7c670f9a86d.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ac5b95de\x2d3ba7\x2d444c\x2da749\x2d3fcdc9fb1495.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ac5b95de\x2d3ba7\x2d444c\x2da749\x2d3fcdc9fb1495.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-293eae6a\x2dfbc7\x2d4c6d\x2d9efc\x2de569ff77508d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-293eae6a\x2dfbc7\x2d4c6d\x2d9efc\x2de569ff77508d.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a12e2e6f\x2d5c6d\x2d4675\x2da799\x2d490852c2c933.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a12e2e6f\x2d5c6d\x2d4675\x2da799\x2d490852c2c933.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ac5b95de\x2d3ba7\x2d444c\x2da749\x2d3fcdc9fb1495.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ac5b95de\x2d3ba7\x2d444c\x2da749\x2d3fcdc9fb1495.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a12e2e6f\x2d5c6d\x2d4675\x2da799\x2d490852c2c933.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a12e2e6f\x2d5c6d\x2d4675\x2da799\x2d490852c2c933.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ce022c57\x2dd12e\x2d4bec\x2d8d6f\x2d886165185d40.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ce022c57\x2dd12e\x2d4bec\x2d8d6f\x2d886165185d40.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6b6e86d0\x2d1f06\x2d4961\x2da04b\x2db7c670f9a86d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-6b6e86d0\x2d1f06\x2d4961\x2da04b\x2db7c670f9a86d.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-293eae6a\x2dfbc7\x2d4c6d\x2d9efc\x2de569ff77508d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-293eae6a\x2dfbc7\x2d4c6d\x2d9efc\x2de569ff77508d.mount has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab conmon[117968]: conmon c61028a2d66e7eda85e8 : container 118000 exited with status 1
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: crio-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: crio-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope: Consumed 574ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope completed and consumed the indicated resources.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope has successfully entered the 'dead' state.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope: Consumed 54ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79.scope completed and consumed the indicated resources.
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.661324886Z" level=info msg="runSandbox: deleting pod ID bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d from idIndex" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.661339107Z" level=info msg="runSandbox: deleting pod ID b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c from idIndex" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.661386068Z" level=info msg="runSandbox: removing pod sandbox b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.661400160Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.661413133Z" level=info msg="runSandbox: unmounting shmPath for sandbox b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.661429724Z" level=info msg="runSandbox: removing pod sandbox bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.661473342Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.661491935Z" level=info msg="runSandbox: unmounting shmPath for sandbox bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.674286993Z" level=info msg="runSandbox: deleting pod ID 08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87 from idIndex" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.674329642Z" level=info msg="runSandbox: removing pod sandbox 08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.674344896Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.674374008Z" level=info msg="runSandbox: unmounting shmPath for sandbox 08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.674291742Z" level=info msg="runSandbox: deleting pod ID 49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052 from idIndex" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.674409526Z" level=info msg="runSandbox: removing pod sandbox 49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.674426045Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.674442023Z" level=info msg="runSandbox: unmounting shmPath for sandbox 49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.675289215Z" level=info msg="runSandbox: deleting pod ID 2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e from idIndex" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.675318461Z" level=info msg="runSandbox: removing pod sandbox 2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.675331421Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.675345792Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.675625424Z" level=info msg="runSandbox: removing pod sandbox from storage: b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.678460396Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.678480806Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=006319e6-1805-4eb8-8138-88b04c1e7e95 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.678834 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.678893 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.678923 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.678979 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.680511924Z" level=info msg="runSandbox: removing pod sandbox from storage: bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.684359829Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.684379679Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=6d6cb7e1-b940-426b-acf3-e7acb5b6c8f1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.684525 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.684561 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.684587 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.684637 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.687524242Z" level=info msg="runSandbox: removing pod sandbox from storage: 49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.690687424Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.690705948Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=bcf0c1ef-48d8-4816-b1c2-88cbd17f8efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.690812 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.690846 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.690867 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.690911 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.691494913Z" level=info msg="runSandbox: removing pod sandbox from storage: 2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.691574772Z" level=info msg="runSandbox: removing pod sandbox from storage: 08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.694695511Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.694712759Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=52725f05-0de7-4508-a5c2-ea34e1df69e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.694827 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.694852 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.694874 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.694911 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.697643667Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.697660801Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=88e2e60d-31c2-413a-8c66-ab7086a66873 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.697863 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.697910 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.697937 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.697983 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.715198 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/190.log" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.715795 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/189.log" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.716918 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" exitCode=1 Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.716990 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79} Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.717013 8631 scope.go:115] "RemoveContainer" containerID="b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.717279 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.717407 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.717456 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.717539 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.717557900Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.717590747Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.717602 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.717715429Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.717737809Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.717766654Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.717777025Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.717791066Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.717743659Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:57.718290 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:12:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:57.718770 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.720590075Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.720611969Z" level=info msg="Removing 
container: b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f" id=c7d3ee59-73d3-43bb-be97-88609f56df69 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.722423099Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.758388507Z" level=info msg="Removed container b1ca34f564a47d70be58f05068329564b357e84788a01fa5e896a210335c287f: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=c7d3ee59-73d3-43bb-be97-88609f56df69 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.766317980Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/7ad6e076-4ad1-4964-9a6b-e4f8785f62f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.766339305Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.767121749Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/3f069d56-cd06-4b69-be48-d3dae7772cea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.767145032Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.768164833Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/5a8367da-84e4-4a8c-9c42-bd6c60b12557 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.768183616Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.770020841Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/a97a3683-07e5-4b75-90a0-436bd185694d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.770042777Z" level=info msg="Adding pod 
openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.771001384Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5ed57d29-6761-4c42-808e-5da1332236d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:12:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:57.771022060Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:12:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:12:58.143940520Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ce022c57\x2dd12e\x2d4bec\x2d8d6f\x2d886165185d40.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6b6e86d0\x2d1f06\x2d4961\x2da04b\x2db7c670f9a86d.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ac5b95de\x2d3ba7\x2d444c\x2da749\x2d3fcdc9fb1495.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-293eae6a\x2dfbc7\x2d4c6d\x2d9efc\x2de569ff77508d.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a12e2e6f\x2d5c6d\x2d4675\x2da799\x2d490852c2c933.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2acf702c11f4409ed4362a62615a8478988b3c224ff822f5d9827324e286316e-userdata-shm.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-08613a41ee482a761932ef7034391439f96f00a675c375ef2242b088def2bc87-userdata-shm.mount: Succeeded.
Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-49b4718bcbb423b8df0cdc28969e39255f9d00ab43203efcdbfb387d66c8e052-userdata-shm.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bff8da373ead71a57f7a8c194981b5d69cd5c202753b1a6525125244fa908c7d-userdata-shm.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b3e29935897f61a42d75a30d23867eb7b9c43be1dc1f2fc733864781b23d5f0c-userdata-shm.mount: Succeeded. Jan 23 17:12:58 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-bc60a4ebb38fce823e446c6afb5af44cccecdb2b6cbd7d477045e8cc1ac75ba4-merged.mount: Succeeded.
Jan 23 17:12:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:58.720079 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/190.log" Jan 23 17:12:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:12:58.722202 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:12:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:12:58.722710 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:13:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:00.021626975Z" level=info msg="NetworkStart: stopping network for sandbox 35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:00.021809586Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/a131c8d2-a9a5-47ba-aaba-0d8c9c9e1c90 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:00.021832209Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:00.021839100Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:00.021845412Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.029610840Z" level=info msg="NetworkStart: stopping network for sandbox f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.029751259Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/9ab55d08-e44e-46f2-9b4b-5680fdd6c9d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.029773499Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.029779744Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:02 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:13:02.029786726Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.030002848Z" level=info msg="NetworkStart: stopping network for sandbox 1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.030119760Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/1766909c-b9d8-4e33-a74e-5e0a5304911e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.030142442Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.030148929Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:02.030155055Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:03.020512195Z" level=info msg="NetworkStart: stopping network for sandbox b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:03.020652180Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/4561a1e3-e2ee-460a-b0e5-b3b64fe7b4fe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:03.020680193Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:03.020687100Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:03.020693673Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:04.022832804Z" level=info msg="NetworkStart: stopping network for sandbox 32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:04.022970753Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab 
Namespace:openshift-kube-apiserver ID:32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/6c8ae118-d142-438d-aa4c-06d79dc817ae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:04.022993814Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:04.023001084Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:04.023007229Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:10.022323607Z" level=info msg="NetworkStart: stopping network for sandbox 98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:10.022457960Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/14b5cd0c-d7bc-47e9-abbf-9822938b7954 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:10.022481393Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:10.022488374Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:10.022494716Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:12.996076 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:13:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:12.996575 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.040100494Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab 
from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.040144018Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2a100851\x2d04dd\x2d4d38\x2d8322\x2d4f4e34fc2868.mount: Succeeded. Jan 23 17:13:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2a100851\x2d04dd\x2d4d38\x2d8322\x2d4f4e34fc2868.mount: Succeeded. Jan 23 17:13:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2a100851\x2d04dd\x2d4d38\x2d8322\x2d4f4e34fc2868.mount: Succeeded. Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.081354381Z" level=info msg="runSandbox: deleting pod ID f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef from idIndex" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.081380133Z" level=info msg="runSandbox: removing pod sandbox f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.081394492Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.081409894Z" level=info msg="runSandbox: unmounting shmPath for sandbox f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef-userdata-shm.mount: Succeeded.
Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.101503208Z" level=info msg="runSandbox: removing pod sandbox from storage: f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.104795902Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.104816827Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=45d71695-7119-49d3-91fa-4f4c12cc5fa6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:23.105045 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:13:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:23.105212 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:13:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:23.105239 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:13:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:23.105296 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f9b5fd285dff654d8830d2fa41b10d1d2774aa6bffce2a3af81a4ffeaf4467ef): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.919041421Z" level=info msg="NetworkStart: stopping network for sandbox 69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.919198921Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-rd6x4 Namespace:openshift-multus ID:69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924 UID:0530de83-1dba-45d0-a4ff-dd81dd6c3f9b NetNS:/var/run/netns/eebda105-35a8-4677-8570-2ef42643e495 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.919227673Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.919234348Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:23.919240611Z" level=info msg="Deleting pod openshift-multus_cni-sysctl-allowlist-ds-rd6x4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:25.996486 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:13:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:25.996990 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:13:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:27.889627 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:13:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:27.889649 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:13:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:27.889656 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:13:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:27.889663 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:13:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:27.889668 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:13:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:27.889676 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:13:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:27.889682 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:13:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:28.143384059Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:13:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:34.995788 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:13:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:34.996098312Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:34.996351000Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:13:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:35.009381304Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/550d237f-898d-4814-bedc-c9b477f54090 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:35.009401125Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.031786542Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.031824803Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-19ecc5fa\x2d26b7\x2d4b12\x2d9e21\x2d65329e96e52b.mount: Succeeded. 
Jan 23 17:13:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-19ecc5fa\x2d26b7\x2d4b12\x2d9e21\x2d65329e96e52b.mount: Succeeded. Jan 23 17:13:36 hub-master-0.workload.bos2.lab systemd[1]: run-netns-19ecc5fa\x2d26b7\x2d4b12\x2d9e21\x2d65329e96e52b.mount: Succeeded. Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.082308026Z" level=info msg="runSandbox: deleting pod ID 87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f from idIndex" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.082339325Z" level=info msg="runSandbox: removing pod sandbox 87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.082360441Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.082383862Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f-userdata-shm.mount: Succeeded.
Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.098454199Z" level=info msg="runSandbox: removing pod sandbox from storage: 87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.104515342Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:36.104546488Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ab610998-7797-4f26-a38e-0a2f31eb7747 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:36.104806 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:13:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:36.104858 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:13:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:36.104889 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:13:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:36.104949 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(87f5d36c4712b7e4f31906a73d2ace9cd2ab9352473a803f30a3a1cbfb65757f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.033191840Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.033234266Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dc988dbf\x2df729\x2d4677\x2d89b5\x2d76d24301fd21.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-dc988dbf\x2df729\x2d4677\x2d89b5\x2d76d24301fd21.mount has successfully entered the 'dead' state. Jan 23 17:13:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dc988dbf\x2df729\x2d4677\x2d89b5\x2d76d24301fd21.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-dc988dbf\x2df729\x2d4677\x2d89b5\x2d76d24301fd21.mount has successfully entered the 'dead' state. Jan 23 17:13:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dc988dbf\x2df729\x2d4677\x2d89b5\x2d76d24301fd21.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-dc988dbf\x2df729\x2d4677\x2d89b5\x2d76d24301fd21.mount has successfully entered the 'dead' state. 
Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.077300695Z" level=info msg="runSandbox: deleting pod ID 966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679 from idIndex" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.077323043Z" level=info msg="runSandbox: removing pod sandbox 966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.077336363Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.077347717Z" level=info msg="runSandbox: unmounting shmPath for sandbox 966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.089447595Z" level=info msg="runSandbox: removing pod sandbox from storage: 966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.092948283Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:37.092965708Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c22d8e14-2633-4ec5-8665-562f8f07b67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:37.093247 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:13:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:37.093293 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:13:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:37.093314 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:13:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:37.093357 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(966a04c94c899404759fdfe8a2b4a78c7bfbf94d208072d556ea5f2ac5c38679): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494018.1196] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494018.1201] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494018.1202] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494018.1378] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:13:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494018.1379] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:13:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:38.608367 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-rd6x4] Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.032767830Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.032805549Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3ec8d0e6\x2d845b\x2d4476\x2d966a\x2d980987c0ab31.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3ec8d0e6\x2d845b\x2d4476\x2d966a\x2d980987c0ab31.mount has successfully entered the 'dead' state. Jan 23 17:13:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3ec8d0e6\x2d845b\x2d4476\x2d966a\x2d980987c0ab31.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3ec8d0e6\x2d845b\x2d4476\x2d966a\x2d980987c0ab31.mount has successfully entered the 'dead' state. Jan 23 17:13:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3ec8d0e6\x2d845b\x2d4476\x2d966a\x2d980987c0ab31.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3ec8d0e6\x2d845b\x2d4476\x2d966a\x2d980987c0ab31.mount has successfully entered the 'dead' state. 
Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.082310432Z" level=info msg="runSandbox: deleting pod ID 2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53 from idIndex" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.082334760Z" level=info msg="runSandbox: removing pod sandbox 2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.082347602Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.082362614Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.094485629Z" level=info msg="runSandbox: removing pod sandbox from storage: 2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.098047470Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:40.098065080Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=50cf4fa9-63c4-401b-b0a2-6f52a42f2f3f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:40.098313 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:13:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:40.098360 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:13:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:40.098383 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:13:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:40.098434 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2959276afbfdb329dfd6f0c367a2c26fcb61f8fdcb1382062e3556fcafeeac53): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:13:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:40.996427 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:13:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:40.996945 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.036280995Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.036318919Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.036610137Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.036639033Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4efd09dc\x2da0f4\x2d4723\x2dbce4\x2d79ac0003ae6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4efd09dc\x2da0f4\x2d4723\x2dbce4\x2d79ac0003ae6c.mount has successfully entered the 'dead' state. 
Jan 23 17:13:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f7f4241d\x2dff37\x2d4dad\x2db0ce\x2de0e281cfe7d9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f7f4241d\x2dff37\x2d4dad\x2db0ce\x2de0e281cfe7d9.mount has successfully entered the 'dead' state. Jan 23 17:13:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f7f4241d\x2dff37\x2d4dad\x2db0ce\x2de0e281cfe7d9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f7f4241d\x2dff37\x2d4dad\x2db0ce\x2de0e281cfe7d9.mount has successfully entered the 'dead' state. Jan 23 17:13:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4efd09dc\x2da0f4\x2d4723\x2dbce4\x2d79ac0003ae6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4efd09dc\x2da0f4\x2d4723\x2dbce4\x2d79ac0003ae6c.mount has successfully entered the 'dead' state. Jan 23 17:13:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f7f4241d\x2dff37\x2d4dad\x2db0ce\x2de0e281cfe7d9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f7f4241d\x2dff37\x2d4dad\x2db0ce\x2de0e281cfe7d9.mount has successfully entered the 'dead' state. Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.072359495Z" level=info msg="runSandbox: deleting pod ID ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df from idIndex" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.072382748Z" level=info msg="runSandbox: removing pod sandbox ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.072396205Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.072407228Z" level=info msg="runSandbox: unmounting shmPath for sandbox ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.079334729Z" level=info msg="runSandbox: deleting pod ID 994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35 from idIndex" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.079358988Z" level=info msg="runSandbox: removing pod sandbox 994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.079370893Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.079381386Z" level=info msg="runSandbox: unmounting shmPath for sandbox 994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.093474174Z" level=info msg="runSandbox: removing pod sandbox from storage: ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.096864405Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.096881214Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=d8576744-d977-4237-be41-f9c495221e0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:41.097109 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:13:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:41.097151 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:13:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:41.097173 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:13:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:41.097229 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.103487415Z" level=info msg="runSandbox: removing pod sandbox from storage: 994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.106718534Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:41.106737536Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=5d894dd3-2b9a-4251-a954-3eeeed92a9a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:41.106916 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:13:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:41.106953 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:13:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:41.106975 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:13:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:41.107016 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:13:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4efd09dc\x2da0f4\x2d4723\x2dbce4\x2d79ac0003ae6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4efd09dc\x2da0f4\x2d4723\x2dbce4\x2d79ac0003ae6c.mount has successfully entered the 'dead' state. Jan 23 17:13:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-994df20a68ff3a95b4a34084c63a5ee6beb2c4bbe1eb0d33450399dd7e033c35-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:13:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ebc7ccf431971be2c794494b65063d1d74306aeb5e647e01f401029a2f9dc7df-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.780204086Z" level=info msg="NetworkStart: stopping network for sandbox 9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.780353686Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/7ad6e076-4ad1-4964-9a6b-e4f8785f62f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.780376813Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.780383410Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.780389275Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.782020098Z" level=info msg="NetworkStart: stopping network for sandbox 72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.782130617Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/5a8367da-84e4-4a8c-9c42-bd6c60b12557 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.782154022Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.782161866Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.782168972Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.782856328Z" level=info msg="NetworkStart: stopping network for sandbox d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.782977193Z" 
level=info msg="NetworkStart: stopping network for sandbox 93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783010062Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/3f069d56-cd06-4b69-be48-d3dae7772cea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783040333Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783048541Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783055795Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783082522Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5ed57d29-6761-4c42-808e-5da1332236d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783104844Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783112201Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783118492Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783402502Z" level=info msg="NetworkStart: stopping network for sandbox 081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783523357Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/a97a3683-07e5-4b75-90a0-436bd185694d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783549486Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783558624Z" level=warning msg="falling back to loading from 
existing plugins on disk" Jan 23 17:13:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:42.783565016Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.032746690Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.032785966Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a131c8d2\x2da9a5\x2d47ba\x2daaba\x2d0d8c9c9e1c90.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a131c8d2\x2da9a5\x2d47ba\x2daaba\x2d0d8c9c9e1c90.mount has successfully entered the 'dead' state. Jan 23 17:13:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a131c8d2\x2da9a5\x2d47ba\x2daaba\x2d0d8c9c9e1c90.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a131c8d2\x2da9a5\x2d47ba\x2daaba\x2d0d8c9c9e1c90.mount has successfully entered the 'dead' state. Jan 23 17:13:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a131c8d2\x2da9a5\x2d47ba\x2daaba\x2d0d8c9c9e1c90.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a131c8d2\x2da9a5\x2d47ba\x2daaba\x2d0d8c9c9e1c90.mount has successfully entered the 'dead' state. 
Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.072307120Z" level=info msg="runSandbox: deleting pod ID 35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218 from idIndex" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.072335818Z" level=info msg="runSandbox: removing pod sandbox 35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.072349832Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.072362861Z" level=info msg="runSandbox: unmounting shmPath for sandbox 35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.084406607Z" level=info msg="runSandbox: removing pod sandbox from storage: 35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.087964407Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:45.087983093Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=03381ee0-ed2b-4248-991c-ed184cc30ee8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:45.088236 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:13:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:45.088281 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:13:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:45.088304 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:13:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:45.088356 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(35426a7afa01145d58aa75c07274473fd15a0118e747de67fa5b3a823afe6218): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.040159019Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.040203115Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.040182201Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.040261574Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:13:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1766909c\x2db9d8\x2d4e33\x2da74e\x2d5e0a5304911e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1766909c\x2db9d8\x2d4e33\x2da74e\x2d5e0a5304911e.mount has successfully entered the 'dead' state. Jan 23 17:13:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9ab55d08\x2de44e\x2d46f2\x2d9b4b\x2d5680fdd6c9d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9ab55d08\x2de44e\x2d46f2\x2d9b4b\x2d5680fdd6c9d1.mount has successfully entered the 'dead' state. Jan 23 17:13:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1766909c\x2db9d8\x2d4e33\x2da74e\x2d5e0a5304911e.mount: Succeeded. 
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-1766909c\x2db9d8\x2d4e33\x2da74e\x2d5e0a5304911e.mount has successfully entered the 'dead' state.
Jan 23 17:13:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9ab55d08\x2de44e\x2d46f2\x2d9b4b\x2d5680fdd6c9d1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9ab55d08\x2de44e\x2d46f2\x2d9b4b\x2d5680fdd6c9d1.mount has successfully entered the 'dead' state.
Jan 23 17:13:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1766909c\x2db9d8\x2d4e33\x2da74e\x2d5e0a5304911e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-1766909c\x2db9d8\x2d4e33\x2da74e\x2d5e0a5304911e.mount has successfully entered the 'dead' state.
Jan 23 17:13:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9ab55d08\x2de44e\x2d46f2\x2d9b4b\x2d5680fdd6c9d1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-9ab55d08\x2de44e\x2d46f2\x2d9b4b\x2d5680fdd6c9d1.mount has successfully entered the 'dead' state.
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.086322556Z" level=info msg="runSandbox: deleting pod ID 1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5 from idIndex" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.086349667Z" level=info msg="runSandbox: removing pod sandbox 1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.086362814Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.086378248Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.086323243Z" level=info msg="runSandbox: deleting pod ID f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae from idIndex" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.086433264Z" level=info msg="runSandbox: removing pod sandbox f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.086446593Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.086458550Z" level=info msg="runSandbox: unmounting shmPath for sandbox f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:13:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.102436245Z" level=info msg="runSandbox: removing pod sandbox from storage: f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.103412785Z" level=info msg="runSandbox: removing pod sandbox from storage: 1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.105910289Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.105928423Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=9a8689a3-9537-4f9b-b973-a3fe6036ffb7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:47.106175 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:13:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:47.106225 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:13:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:47.106248 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:13:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:47.106292 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f4eca90422fa653b40a91618de75e9f1204860f1578cda623327a98fe16c10ae): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.108857819Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:47.108874310Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=06322217-f01a-48ab-ab20-eb9ce14b9e48 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:47.109084 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:13:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:47.109130 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:13:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:47.109152 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:13:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:47.109195 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(1ac4899138a4ca025c56e39dd621eadc4445e728de45a48954eab0ee84efebf5): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.032687759Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.032718824Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4561a1e3\x2de2ee\x2d460a\x2db0e5\x2db3b64fe7b4fe.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4561a1e3\x2de2ee\x2d460a\x2db0e5\x2db3b64fe7b4fe.mount has successfully entered the 'dead' state.
Jan 23 17:13:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4561a1e3\x2de2ee\x2d460a\x2db0e5\x2db3b64fe7b4fe.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4561a1e3\x2de2ee\x2d460a\x2db0e5\x2db3b64fe7b4fe.mount has successfully entered the 'dead' state.
Jan 23 17:13:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4561a1e3\x2de2ee\x2d460a\x2db0e5\x2db3b64fe7b4fe.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4561a1e3\x2de2ee\x2d460a\x2db0e5\x2db3b64fe7b4fe.mount has successfully entered the 'dead' state.
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.073312632Z" level=info msg="runSandbox: deleting pod ID b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406 from idIndex" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.073337359Z" level=info msg="runSandbox: removing pod sandbox b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.073350019Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.073361258Z" level=info msg="runSandbox: unmounting shmPath for sandbox b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406-userdata-shm.mount has successfully entered the 'dead' state.
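
Note: the mount units above carry names like run-utsns-1766909c\x2db9d8\x2d4e33\x2da74e\x2d5e0a5304911e.mount because systemd derives unit names from filesystem paths: '/' becomes '-', and bytes that are not alphanumeric (including the '-' inside the namespace UUIDs) are escaped as \xXX hex sequences, per systemd-escape(1). A minimal re-implementation of that rule, ignoring edge cases such as a leading dot; this is our illustration, not systemd code:

    package main

    import "fmt"

    // escapeComponent applies systemd's unit-name escaping to one path
    // component: alphanumerics plus ':', '_' and '.' pass through, everything
    // else (including '-') becomes a \xXX hex escape.
    func escapeComponent(s string) string {
        out := make([]byte, 0, len(s))
        for i := 0; i < len(s); i++ {
            c := s[i]
            switch {
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.', c == ':':
                out = append(out, c)
            default:
                out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
            }
        }
        return string(out)
    }

    func main() {
        // Reproduces a unit name seen in the journal above.
        fmt.Println("run-utsns-" + escapeComponent("1766909c-b9d8-4e33-a74e-5e0a5304911e") + ".mount")
    }
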
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.086490743Z" level=info msg="runSandbox: removing pod sandbox from storage: b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.089721095Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:48.089739682Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=a536551c-f234-4961-bf39-0f07a7b5b14b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:48.089941 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:13:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:48.089985 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:13:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:48.090008 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:13:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:48.090052 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9c633cd9bc9e7fd103123e2d7559aa640d26fa99ee3ee98979429e55d9aa406): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.034344793Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.034375893Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6c8ae118\x2dd142\x2d438d\x2daa4c\x2d06d79dc817ae.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-6c8ae118\x2dd142\x2d438d\x2daa4c\x2d06d79dc817ae.mount has successfully entered the 'dead' state.
Jan 23 17:13:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6c8ae118\x2dd142\x2d438d\x2daa4c\x2d06d79dc817ae.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-6c8ae118\x2dd142\x2d438d\x2daa4c\x2d06d79dc817ae.mount has successfully entered the 'dead' state.
Jan 23 17:13:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6c8ae118\x2dd142\x2d438d\x2daa4c\x2d06d79dc817ae.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-6c8ae118\x2dd142\x2d438d\x2daa4c\x2d06d79dc817ae.mount has successfully entered the 'dead' state.
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.072279146Z" level=info msg="runSandbox: deleting pod ID 32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1 from idIndex" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.072326444Z" level=info msg="runSandbox: removing pod sandbox 32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.072339340Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.072355716Z" level=info msg="runSandbox: unmounting shmPath for sandbox 32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.088409803Z" level=info msg="runSandbox: removing pod sandbox from storage: 32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.091621558Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:49.091639035Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=694bf8ce-892a-4f9b-8b25-df35530e64c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:49.091793 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:13:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:49.091858 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:13:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:49.091879 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:13:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:49.091929 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(32ea1ee428c37ed2c1ad907145abea919f89c6aff3e64078b5f39288efd7cde1): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 17:13:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:50.996221 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:13:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:50.996579259Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:50.996619411Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:13:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:51.008769172Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/7ebbcdc0-6afe-462d-893c-cc5f873bccca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:13:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:51.008789095Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:13:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:51.996214 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:13:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:51.996307 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
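
Note: the "Got pod network &{...}" lines are Go's %+v rendering of the request CRI-O hands to its ocicni layer before every CNI ADD. The sketch below reconstructs that shape with a struct whose field names are taken directly from the log line; the field types are our simplification, not cri-o's actual ocicni definitions:

    package main

    import "fmt"

    // RuntimeConfig holds the per-network knobs visible in the log output.
    type RuntimeConfig struct {
        IP           string
        MAC          string
        PortMappings []string
        Bandwidth    string
        IpRanges     []string
    }

    // PodNetwork mirrors the fields printed as "Got pod network &{...}".
    type PodNetwork struct {
        Name          string
        Namespace     string
        ID            string // pod sandbox ID
        UID           string
        NetNS         string // bind-mounted network namespace path
        Networks      []string
        RuntimeConfig map[string]RuntimeConfig
        Aliases       map[string][]string
    }

    func main() {
        pn := PodNetwork{
            Name:      "kube-controller-manager-guard-hub-master-0.workload.bos2.lab",
            Namespace: "openshift-kube-controller-manager",
            ID:        "9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2",
            UID:       "2284ac10-60cf-4768-bd24-3ea63b730ce6",
            NetNS:     "/var/run/netns/7ebbcdc0-6afe-462d-893c-cc5f873bccca",
            RuntimeConfig: map[string]RuntimeConfig{
                "multus-cni-network": {},
            },
        }
        // %+v of the pointer yields the &{Name:... Namespace:...} shape above.
        fmt.Printf("Got pod network %+v\n", &pn)
    }
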
Jan 23 17:13:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:51.996959571Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:51.996998514Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:13:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:51.997053309Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:51.997082483Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:13:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:52.015133563Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/df7b5b0b-4300-4509-8985-ad3da3c23e4e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:13:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:52.015162038Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:13:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:52.015788346Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/45614214-3cdf-42aa-b1d5-41de8f253651 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:13:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:52.015814324Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:13:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:54.995460 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:13:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:54.995924410Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:54.995970600Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.008070301Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/71e76ddb-5f15-4750-8845-584f19b5a631 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.008090230Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.034031986Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.034061703Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-14b5cd0c\x2dd7bc\x2d47e9\x2dabbf\x2d9822938b7954.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-14b5cd0c\x2dd7bc\x2d47e9\x2dabbf\x2d9822938b7954.mount has successfully entered the 'dead' state.
Jan 23 17:13:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-14b5cd0c\x2dd7bc\x2d47e9\x2dabbf\x2d9822938b7954.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-14b5cd0c\x2dd7bc\x2d47e9\x2dabbf\x2d9822938b7954.mount has successfully entered the 'dead' state.
Jan 23 17:13:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-14b5cd0c\x2dd7bc\x2d47e9\x2dabbf\x2d9822938b7954.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-14b5cd0c\x2dd7bc\x2d47e9\x2dabbf\x2d9822938b7954.mount has successfully entered the 'dead' state.
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.076300927Z" level=info msg="runSandbox: deleting pod ID 98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31 from idIndex" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.076323887Z" level=info msg="runSandbox: removing pod sandbox 98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.076336853Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.076349613Z" level=info msg="runSandbox: unmounting shmPath for sandbox 98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.088414707Z" level=info msg="runSandbox: removing pod sandbox from storage: 98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.091299617Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.091317073Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=fea0995c-6ef0-4b0f-b845-c6e080e3d5ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:55.091503 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:55.091544 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:55.091568 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:55.091616 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:13:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:55.996523 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.996840551Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:55.996884189Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:55.997216 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:13:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:13:55.997716 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:13:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-98d88886e4176d20ec4b782af20e44563c2e2e78b6a68b9e283834fa0593bf31-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:13:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:56.009489992Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/0bf01a7f-2e0b-411c-aa70-efe1f3031237 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:13:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:56.009509358Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:13:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:57.996705 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:13:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:13:57.996853 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:13:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:57.997082942Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:57.997108996Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:13:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:57.997144604Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:13:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:57.997121941Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:13:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:58.012168801Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/5a8143b7-fd73-4019-9ec3-f2ab19398aad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:13:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:58.012191044Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:13:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:58.012969319Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/c75ac5f5-60cc-49b4-8a96-a4c80cc25b0c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:13:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:58.012987810Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:13:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:13:58.143660364Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:14:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:00.995659 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:14:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:00.995773 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
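
Note: unlike the sandbox errors, the ovnkube-node failure above is the likely root cause of everything else in this window: until that container stays up, ovn-kubernetes never writes the readiness indicator file, and every other pod's CNI ADD keeps timing out. The "back-off 5m0s" comes from kubelet's exponential restart back-off. The exact constants live in kubelet; the sketch below assumes the well-known defaults (initial delay 10s, doubling per crash, capped at 5m, reset after a container runs cleanly for 10 minutes):

    package main

    import (
        "fmt"
        "time"
    )

    // backOff returns the wait before restart number `restarts`, doubling
    // from 10s and saturating at the 5m cap reported as "back-off 5m0s".
    func backOff(restarts int) time.Duration {
        d := 10 * time.Second
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("restart %d -> wait %v\n", r, backOff(r))
        }
        // From restart 5 on this prints 5m0s: ovnkube-node-897lw has crashed
        // often enough to sit at the cap, so its pod sync is skipped for 5m.
    }
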
Jan 23 17:14:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:00.995972463Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:00.996204222Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:14:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:00.996068029Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:00.996391084Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:14:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:01.014338953Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/3fd5da66-1829-48ae-99ea-59c2077e5496 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:14:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:01.014370574Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:14:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:01.014768243Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/8a1f00e4-a1aa-4806-ba24-0617b0e44cb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:14:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:01.014794295Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:14:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:02.996093 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:14:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:02.996485555Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:02.996536854Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:14:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:03.008689554Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/a334a83c-5c29-4a8f-b778-7e96c83f6367 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:14:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:03.008715169Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:14:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:06.996428 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:14:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:06.996763293Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:06.996805800Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:14:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:07.009134636Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/dee9c5fb-fd80-4ba7-9b2a-67f86cb5eb23 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:14:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:07.009156526Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.930508550Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_cni-sysctl-allowlist-ds-rd6x4_openshift-multus_0530de83-1dba-45d0-a4ff-dd81dd6c3f9b_0(69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924): error removing pod openshift-multus_cni-sysctl-allowlist-ds-rd6x4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/cni-sysctl-allowlist-ds-rd6x4/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out
waiting for the condition" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.930544349Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-eebda105\x2d35a8\x2d4677\x2d8570\x2d2ef42643e495.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-eebda105\x2d35a8\x2d4677\x2d8570\x2d2ef42643e495.mount has successfully entered the 'dead' state. Jan 23 17:14:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-eebda105\x2d35a8\x2d4677\x2d8570\x2d2ef42643e495.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-eebda105\x2d35a8\x2d4677\x2d8570\x2d2ef42643e495.mount has successfully entered the 'dead' state. Jan 23 17:14:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-eebda105\x2d35a8\x2d4677\x2d8570\x2d2ef42643e495.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-eebda105\x2d35a8\x2d4677\x2d8570\x2d2ef42643e495.mount has successfully entered the 'dead' state. Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.968313034Z" level=info msg="runSandbox: deleting pod ID 69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924 from idIndex" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.968340324Z" level=info msg="runSandbox: removing pod sandbox 69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.968354433Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.968367654Z" level=info msg="runSandbox: unmounting shmPath for sandbox 69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.988429293Z" level=info msg="runSandbox: removing pod sandbox from storage: 69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.991648311Z" level=info msg="runSandbox: releasing container name: k8s_POD_cni-sysctl-allowlist-ds-rd6x4_openshift-multus_0530de83-1dba-45d0-a4ff-dd81dd6c3f9b_0" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:08.991667460Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_cni-sysctl-allowlist-ds-rd6x4_openshift-multus_0530de83-1dba-45d0-a4ff-dd81dd6c3f9b_0" id=d7e48933-e25c-4796-9870-86dd952f5deb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:08.991900    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-rd6x4_openshift-multus_0530de83-1dba-45d0-a4ff-dd81dd6c3f9b_0(69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924): error adding pod openshift-multus_cni-sysctl-allowlist-ds-rd6x4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-rd6x4/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:14:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:08.992061    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-rd6x4_openshift-multus_0530de83-1dba-45d0-a4ff-dd81dd6c3f9b_0(69e3dbdf49dd9716ce729b6c2d5928cfa87214f473503def84c4da74c3c9c924): error adding pod openshift-multus_cni-sysctl-allowlist-ds-rd6x4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-rd6x4/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/cni-sysctl-allowlist-ds-rd6x4"
Jan 23 17:14:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:08.996489    8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:14:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:08.997028    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:09.927458    8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rprzb\" (UniqueName: \"kubernetes.io/projected/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-kube-api-access-rprzb\") pod \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") "
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:09.927496    8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-cni-sysctl-allowlist\") pod \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") "
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:09.927519    8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-ready\") pod \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") "
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:09.927535    8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-tuning-conf-dir\") pod \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\" (UID: \"0530de83-1dba-45d0-a4ff-dd81dd6c3f9b\") "
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:09.927609    8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "0530de83-1dba-45d0-a4ff-dd81dd6c3f9b" (UID: "0530de83-1dba-45d0-a4ff-dd81dd6c3f9b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 17:14:09.927663    8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b/volumes/kubernetes.io~configmap/cni-sysctl-allowlist: clearQuota called, but quotas disabled
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 17:14:09.927703    8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b/volumes/kubernetes.io~empty-dir/ready: clearQuota called, but quotas disabled
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:09.927727    8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-ready" (OuterVolumeSpecName: "ready") pod "0530de83-1dba-45d0-a4ff-dd81dd6c3f9b" (UID: "0530de83-1dba-45d0-a4ff-dd81dd6c3f9b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:09.927796    8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "0530de83-1dba-45d0-a4ff-dd81dd6c3f9b" (UID: "0530de83-1dba-45d0-a4ff-dd81dd6c3f9b"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:09 hub-master-0.workload.bos2.lab systemd[1]: var-lib-kubelet-pods-0530de83\x2d1dba\x2d45d0\x2da4ff\x2ddd81dd6c3f9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drprzb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-kubelet-pods-0530de83\x2d1dba\x2d45d0\x2da4ff\x2ddd81dd6c3f9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drprzb.mount has successfully entered the 'dead' state.
Jan 23 17:14:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:09.942719    8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-kube-api-access-rprzb" (OuterVolumeSpecName: "kube-api-access-rprzb") pod "0530de83-1dba-45d0-a4ff-dd81dd6c3f9b" (UID: "0530de83-1dba-45d0-a4ff-dd81dd6c3f9b"). InnerVolumeSpecName "kube-api-access-rprzb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:14:10 hub-master-0.workload.bos2.lab systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod0530de83_1dba_45d0_a4ff_dd81dd6c3f9b.slice. -- Subject: Unit kubepods-besteffort-pod0530de83_1dba_45d0_a4ff_dd81dd6c3f9b.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-besteffort-pod0530de83_1dba_45d0_a4ff_dd81dd6c3f9b.slice has finished shutting down.
Jan 23 17:14:10 hub-master-0.workload.bos2.lab systemd[1]: kubepods-besteffort-pod0530de83_1dba_45d0_a4ff_dd81dd6c3f9b.slice: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit kubepods-besteffort-pod0530de83_1dba_45d0_a4ff_dd81dd6c3f9b.slice completed and consumed the indicated resources.
Jan 23 17:14:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:10.028240    8631 reconciler.go:399] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-ready\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 17:14:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:10.028262    8631 reconciler.go:399] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-tuning-conf-dir\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 17:14:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:10.028271    8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-rprzb\" (UniqueName: \"kubernetes.io/projected/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-kube-api-access-rprzb\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 17:14:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:10.028280    8631 reconciler.go:399] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b-cni-sysctl-allowlist\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 17:14:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:10.856600    8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-rd6x4]
Jan 23 17:14:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:10.859881    8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-rd6x4]
Jan 23 17:14:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:11.998506    8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0530de83-1dba-45d0-a4ff-dd81dd6c3f9b path="/var/lib/kubelet/pods/0530de83-1dba-45d0-a4ff-dd81dd6c3f9b/volumes"
Jan 23 17:14:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:20.022609105Z" level=info msg="NetworkStart: stopping network for sandbox 6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:20.022982145Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/550d237f-898d-4814-bedc-c9b477f54090 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:14:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:20.023007496Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:14:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:20.023014109Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:14:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:20.023020534Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:14:20 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00101|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 deletes)
Jan 23 17:14:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:21.996430    8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:14:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:21.996948    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.792728396Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.792765179Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.792737019Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.792830370Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.793746385Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.793792887Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.795484266Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.795518743Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.795548174Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.795521136Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5a8367da\x2d84e4\x2d4a8c\x2d9c42\x2dbd6c60b12557.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5a8367da\x2d84e4\x2d4a8c\x2d9c42\x2dbd6c60b12557.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3f069d56\x2dcd06\x2d4b69\x2dbe48\x2dd3dae7772cea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3f069d56\x2dcd06\x2d4b69\x2dbe48\x2dd3dae7772cea.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7ad6e076\x2d4ad1\x2d4964\x2d9a6b\x2de4f8785f62f6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7ad6e076\x2d4ad1\x2d4964\x2d9a6b\x2de4f8785f62f6.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5ed57d29\x2d6761\x2d4c42\x2d808e\x2d5da1332236d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5ed57d29\x2d6761\x2d4c42\x2d808e\x2d5da1332236d1.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a97a3683\x2d07e5\x2d4b75\x2d90a0\x2d436bd185694d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a97a3683\x2d07e5\x2d4b75\x2d90a0\x2d436bd185694d.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5ed57d29\x2d6761\x2d4c42\x2d808e\x2d5da1332236d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5ed57d29\x2d6761\x2d4c42\x2d808e\x2d5da1332236d1.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5a8367da\x2d84e4\x2d4a8c\x2d9c42\x2dbd6c60b12557.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5a8367da\x2d84e4\x2d4a8c\x2d9c42\x2dbd6c60b12557.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7ad6e076\x2d4ad1\x2d4964\x2d9a6b\x2de4f8785f62f6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7ad6e076\x2d4ad1\x2d4964\x2d9a6b\x2de4f8785f62f6.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a97a3683\x2d07e5\x2d4b75\x2d90a0\x2d436bd185694d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a97a3683\x2d07e5\x2d4b75\x2d90a0\x2d436bd185694d.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3f069d56\x2dcd06\x2d4b69\x2dbe48\x2dd3dae7772cea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3f069d56\x2dcd06\x2d4b69\x2dbe48\x2dd3dae7772cea.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5ed57d29\x2d6761\x2d4c42\x2d808e\x2d5da1332236d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5ed57d29\x2d6761\x2d4c42\x2d808e\x2d5da1332236d1.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a97a3683\x2d07e5\x2d4b75\x2d90a0\x2d436bd185694d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a97a3683\x2d07e5\x2d4b75\x2d90a0\x2d436bd185694d.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5a8367da\x2d84e4\x2d4a8c\x2d9c42\x2dbd6c60b12557.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5a8367da\x2d84e4\x2d4a8c\x2d9c42\x2dbd6c60b12557.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3f069d56\x2dcd06\x2d4b69\x2dbe48\x2dd3dae7772cea.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3f069d56\x2dcd06\x2d4b69\x2dbe48\x2dd3dae7772cea.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7ad6e076\x2d4ad1\x2d4964\x2d9a6b\x2de4f8785f62f6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7ad6e076\x2d4ad1\x2d4964\x2d9a6b\x2de4f8785f62f6.mount has successfully entered the 'dead' state.
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838347998Z" level=info msg="runSandbox: deleting pod ID 72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610 from idIndex" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838364370Z" level=info msg="runSandbox: deleting pod ID d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe from idIndex" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838393262Z" level=info msg="runSandbox: removing pod sandbox d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838368161Z" level=info msg="runSandbox: deleting pod ID 93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0 from idIndex" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838433775Z" level=info msg="runSandbox: removing pod sandbox 93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838359429Z" level=info msg="runSandbox: deleting pod ID 081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144 from idIndex" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838473637Z" level=info msg="runSandbox: removing pod sandbox 081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838432994Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838513263Z" level=info msg="runSandbox: unmounting shmPath for sandbox d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838419210Z" level=info msg="runSandbox: deleting pod ID 9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be from idIndex" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838593806Z" level=info msg="runSandbox: removing pod sandbox 9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838604970Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838621896Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838448254Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838661653Z" level=info msg="runSandbox: unmounting shmPath for sandbox 93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838373998Z" level=info msg="runSandbox: removing pod sandbox 72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838751970Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838767375Z" level=info msg="runSandbox: unmounting shmPath for sandbox 72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838487531Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.838845185Z" level=info msg="runSandbox: unmounting shmPath for sandbox 081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.847404204Z" level=info msg="runSandbox: removing pod sandbox from storage: 93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.850950583Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.850968964Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=3fc36f79-1e16-4106-8fd5-5ba4950165fb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.851193    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.851247    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.851270    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.851317    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.851488160Z" level=info msg="runSandbox: removing pod sandbox from storage: 081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.851521087Z" level=info msg="runSandbox: removing pod sandbox from storage: 9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.851527152Z" level=info msg="runSandbox: removing pod sandbox from storage: d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.851531960Z" level=info msg="runSandbox: removing pod sandbox from storage: 72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.855059593Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.855079463Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=4fbd4dea-6452-48ca-a83d-8b5b4de4e296 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.855349    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.855394    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.855418    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.855460    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.861889649Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.861914182Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=6b1c0d8f-6257-43bb-aef3-c3466f1a0430 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.862108    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.862156    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.862178    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.862226    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.865289134Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.865308773Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=5c26e8c6-05b8-4ba6-af19-b874a729b934 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.865623    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.865660    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.865683    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.865725    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.868249185Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.868266454Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=e88c2461-22a1-4634-b2e3-54c34d1ae6c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.868489    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.868527    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.868553    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:27.868598    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.878719    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.878808    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.878982    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879037422Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879068917Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.879104    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879128563Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879160325Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.879163    8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879409915Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879426624Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879446530Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879481510Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879512717Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.879541064Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.890121 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.890139 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.890150 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.890157 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.890162 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.890169 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:14:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:27.890175 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.905479949Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/f4e587a7-6020-4a9b-945d-ed67fab2e0ec Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.905501688Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.905739752Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/a3fcf706-c374-4959-89c9-d11844fe4f52 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.905756208Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.907267786Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/cad1a329-b253-49c5-8b6c-fc2277bb45f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.907287322Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.908077512Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/2dfa328e-8bfc-4ac6-b226-602ce53c2768 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.908100580Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.909220537Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/b0a6643c-a9fb-4dc2-91e1-1a136e5ee608 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:27.909241424Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:28.143623182Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:14:28 hub-master-0.workload.bos2.lab systemd[1]: 
run-containers-storage-overlay\x2dcontainers-93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-93a0fe316911bcbed64ae1c8a405e6e52bdd3d16fc9b3534fe4915f3652b93c0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:14:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-081b4c84e048d851594f5b2c9998a8857ed91901fc29a13c9e0d6a0176712144-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:14:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-72be420fc75a591566cdc8bfc08e08948f257b07ec0905f3055cb3c4b6f2c610-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:14:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d9763f366720d42747a8a642099d9b8b7673d4e4ef19c7fa02fd7d71e1e624fe-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:14:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9ffd6fd8e4505ca4c7d5e73847311c3a9ccb3112580224e1611d9641af9046be-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:14:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:32.996760 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:14:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:32.997460 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:14:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:36.022604450Z" level=info msg="NetworkStart: stopping network for sandbox 9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:36.022967013Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/7ebbcdc0-6afe-462d-893c-cc5f873bccca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:36.022991935Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:36.022998914Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:36.023006509Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.028108875Z" level=info msg="NetworkStart: stopping network for sandbox ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.028260714Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/df7b5b0b-4300-4509-8985-ad3da3c23e4e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.028287588Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.028295537Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.028303330Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" 
(type=multus)" Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.028834543Z" level=info msg="NetworkStart: stopping network for sandbox 47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.029004700Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/45614214-3cdf-42aa-b1d5-41de8f253651 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.029034832Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.029042495Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:37.029049678Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:40.021353487Z" level=info msg="NetworkStart: stopping network for sandbox 0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:40.021488745Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/71e76ddb-5f15-4750-8845-584f19b5a631 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:40.021513032Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:40.021519521Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:40.021526073Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:41.024259738Z" level=info msg="NetworkStart: stopping network for sandbox f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:41.024437199Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/0bf01a7f-2e0b-411c-aa70-efe1f3031237 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:41.024461413Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:41.024467993Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:41.024475288Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.025522384Z" level=info msg="NetworkStart: stopping network for sandbox 603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.025655271Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/5a8143b7-fd73-4019-9ec3-f2ab19398aad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.025677454Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.025683893Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.025690437Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.027352282Z" level=info msg="NetworkStart: stopping network for sandbox f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.027458951Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/c75ac5f5-60cc-49b4-8a96-a4c80cc25b0c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.027478770Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.027485119Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:43.027491256Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:44.996877 8631 scope.go:115] 
"RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:14:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:44.997378 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.027777439Z" level=info msg="NetworkStart: stopping network for sandbox 63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.027920927Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/8a1f00e4-a1aa-4806-ba24-0617b0e44cb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.027944892Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.027952927Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.027958798Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.029568700Z" level=info msg="NetworkStart: stopping network for sandbox 79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.029746757Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/3fd5da66-1829-48ae-99ea-59c2077e5496 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.029773104Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.029780332Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:46.029786406Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:48.021963552Z" level=info 
msg="NetworkStart: stopping network for sandbox 86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:48.022102033Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/a334a83c-5c29-4a8f-b778-7e96c83f6367 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:48.022124061Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:48.022131087Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:48.022137581Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:52.023539146Z" level=info msg="NetworkStart: stopping network for sandbox d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:14:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:52.023739427Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/dee9c5fb-fd80-4ba7-9b2a-67f86cb5eb23 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:14:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:52.023763322Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:14:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:52.023769746Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:14:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:52.023775472Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:14:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:14:58.142627060Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:14:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:14:59.996920 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:14:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:14:59.997620 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.033746534Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.033788673Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-550d237f\x2d898d\x2d4814\x2dbedc\x2dc9b477f54090.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-550d237f\x2d898d\x2d4814\x2dbedc\x2dc9b477f54090.mount has successfully entered the 'dead' state. Jan 23 17:15:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-550d237f\x2d898d\x2d4814\x2dbedc\x2dc9b477f54090.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-550d237f\x2d898d\x2d4814\x2dbedc\x2dc9b477f54090.mount has successfully entered the 'dead' state. Jan 23 17:15:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-550d237f\x2d898d\x2d4814\x2dbedc\x2dc9b477f54090.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-550d237f\x2d898d\x2d4814\x2dbedc\x2dc9b477f54090.mount has successfully entered the 'dead' state. 
Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.074363766Z" level=info msg="runSandbox: deleting pod ID 6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437 from idIndex" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.074390054Z" level=info msg="runSandbox: removing pod sandbox 6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.074408673Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.074421878Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.098444820Z" level=info msg="runSandbox: removing pod sandbox from storage: 6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.101771583Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:05.101792095Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=1c72108f-a9b7-4d82-98f3-55cdfbb29bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:05.102013 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:15:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:05.102061 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:15:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:05.102083 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:15:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:05.102133 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(6788f7e74c522e1e33da12f1f4d4fcc1b56440af919ca525ab1a027f658ab437): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.918599018Z" level=info msg="NetworkStart: stopping network for sandbox 0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.918736084Z" level=info msg="NetworkStart: stopping network for sandbox b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.918980569Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/a3fcf706-c374-4959-89c9-d11844fe4f52 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.918994150Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/f4e587a7-6020-4a9b-945d-ed67fab2e0ec Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.919004303Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.919011081Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.919015625Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.919023382Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.919029944Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.919018899Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.920554967Z" level=info msg="NetworkStart: stopping network for sandbox 05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.920707848Z" level=info msg="Got pod network 
&{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/2dfa328e-8bfc-4ac6-b226-602ce53c2768 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.920732719Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.920740247Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.920750746Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.921253725Z" level=info msg="NetworkStart: stopping network for sandbox 1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.921367712Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/cad1a329-b253-49c5-8b6c-fc2277bb45f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.921388931Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.921395718Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.921401874Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.921916044Z" level=info msg="NetworkStart: stopping network for sandbox 8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.922070130Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/b0a6643c-a9fb-4dc2-91e1-1a136e5ee608 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.922102657Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.922113593Z" level=warning msg="falling 
back to loading from existing plugins on disk" Jan 23 17:15:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:12.922123560Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:14.996372 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:15:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:14.996889 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:15:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:19.995956 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:15:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:19.996353913Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:19.996417017Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:15:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:20.012903598Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/86837049-db93-4b1b-9ca7-2b9289a27041 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:20.012933279Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.034145458Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.034184031Z" 
level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7ebbcdc0\x2d6afe\x2d462d\x2d893c\x2dcc5f873bccca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7ebbcdc0\x2d6afe\x2d462d\x2d893c\x2dcc5f873bccca.mount has successfully entered the 'dead' state. Jan 23 17:15:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7ebbcdc0\x2d6afe\x2d462d\x2d893c\x2dcc5f873bccca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7ebbcdc0\x2d6afe\x2d462d\x2d893c\x2dcc5f873bccca.mount has successfully entered the 'dead' state. Jan 23 17:15:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7ebbcdc0\x2d6afe\x2d462d\x2d893c\x2dcc5f873bccca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7ebbcdc0\x2d6afe\x2d462d\x2d893c\x2dcc5f873bccca.mount has successfully entered the 'dead' state. Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.068315288Z" level=info msg="runSandbox: deleting pod ID 9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2 from idIndex" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.068341936Z" level=info msg="runSandbox: removing pod sandbox 9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.068355506Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.068368430Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.088473968Z" level=info msg="runSandbox: removing pod sandbox from storage: 9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.091858771Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:21.091880602Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=7d163d64-b2b0-48a5-8975-10b08cbab528 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:21.092030 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:15:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:21.092079 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:15:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:21.092102 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:15:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:21.092161 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9a861b8613108f9f012bc9679ff2c5132e835a2e114052d2fde68530bacc85e2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.039152855Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.039212434Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.040082683Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.040126277Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-45614214\x2d3cdf\x2d42aa\x2db1d5\x2d41de8f253651.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-45614214\x2d3cdf\x2d42aa\x2db1d5\x2d41de8f253651.mount has successfully entered the 'dead' state. Jan 23 17:15:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-df7b5b0b\x2d4300\x2d4509\x2d8985\x2dad3da3c23e4e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-df7b5b0b\x2d4300\x2d4509\x2d8985\x2dad3da3c23e4e.mount has successfully entered the 'dead' state. Jan 23 17:15:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-45614214\x2d3cdf\x2d42aa\x2db1d5\x2d41de8f253651.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-45614214\x2d3cdf\x2d42aa\x2db1d5\x2d41de8f253651.mount has successfully entered the 'dead' state. 
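Every sandbox failure in this stretch bottoms out in the same condition: Multus refuses any CNI add or delete until the default network's config, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, exists, and it polls for that file (in Go via wait.PollImmediate, per the "PollImmediate error" text) until a timeout. A minimal sketch of that readiness gate, with illustrative interval and timeout values rather than Multus's actual ones:

    # Sketch of the gate behind "still waiting for readinessindicatorfile":
    # poll until the file exists or a deadline passes. interval/timeout
    # here are illustrative assumptions, not values read from this log.
    import os, time

    def wait_for_readiness_indicator(path: str,
                                     interval: float = 1.0,
                                     timeout: float = 600.0) -> None:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(path):   # default network wrote its config
                return
            time.sleep(interval)
        raise TimeoutError("timed out waiting for the condition")

    # wait_for_readiness_indicator(
    #     "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf")

Until ovn-kubernetes writes that file, both paths fail the same way: adds report "have you checked that your default network is ready?" and deletes report "PollImmediate error waiting for ReadinessIndicatorFile (on del)".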
Jan 23 17:15:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-df7b5b0b\x2d4300\x2d4509\x2d8985\x2dad3da3c23e4e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-df7b5b0b\x2d4300\x2d4509\x2d8985\x2dad3da3c23e4e.mount has successfully entered the 'dead' state. Jan 23 17:15:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-45614214\x2d3cdf\x2d42aa\x2db1d5\x2d41de8f253651.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-45614214\x2d3cdf\x2d42aa\x2db1d5\x2d41de8f253651.mount has successfully entered the 'dead' state. Jan 23 17:15:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-df7b5b0b\x2d4300\x2d4509\x2d8985\x2dad3da3c23e4e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-df7b5b0b\x2d4300\x2d4509\x2d8985\x2dad3da3c23e4e.mount has successfully entered the 'dead' state. Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.092337055Z" level=info msg="runSandbox: deleting pod ID ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb from idIndex" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.092371083Z" level=info msg="runSandbox: removing pod sandbox ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.092389747Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.092405440Z" level=info msg="runSandbox: unmounting shmPath for sandbox ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.092425587Z" level=info msg="runSandbox: deleting pod ID 47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531 from idIndex" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.092456534Z" level=info msg="runSandbox: removing pod sandbox 47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.092473876Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.092488432Z" level=info msg="runSandbox: unmounting shmPath for sandbox 47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 
hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.104446985Z" level=info msg="runSandbox: removing pod sandbox from storage: 47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.108178023Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.108197626Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0224d95f-6ad6-4a50-b227-8177e462e0d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.108470446Z" level=info msg="runSandbox: removing pod sandbox from storage: ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:22.108666 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:15:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:22.108710 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:15:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:22.108732 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:15:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:22.108775 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(47b3bbfa38f45a97742b49c8be0e2740794a304e054a7e06854722416a7eb531): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.111886776Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:22.111905885Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=c3c648b3-2b2f-45c4-9dfd-bb646a867644 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:22.112095 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:15:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:22.112134 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:15:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:22.112156 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:15:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:22.112200 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ef68e34c1fa515e91a166745a1047295a6e51fd57445d16564192e826f9a50fb): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.032075707Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.032117030Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-71e76ddb\x2d5f15\x2d4750\x2d8845\x2d584f19b5a631.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-71e76ddb\x2d5f15\x2d4750\x2d8845\x2d584f19b5a631.mount has successfully entered the 'dead' state. Jan 23 17:15:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-71e76ddb\x2d5f15\x2d4750\x2d8845\x2d584f19b5a631.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-71e76ddb\x2d5f15\x2d4750\x2d8845\x2d584f19b5a631.mount has successfully entered the 'dead' state. Jan 23 17:15:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-71e76ddb\x2d5f15\x2d4750\x2d8845\x2d584f19b5a631.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-71e76ddb\x2d5f15\x2d4750\x2d8845\x2d584f19b5a631.mount has successfully entered the 'dead' state. Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.079367839Z" level=info msg="runSandbox: deleting pod ID 0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91 from idIndex" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.079395248Z" level=info msg="runSandbox: removing pod sandbox 0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.079416454Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.079427561Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91-userdata-shm.mount has successfully entered the 'dead' state. 
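Each failed sandbox then goes through the same runSandbox teardown, and every record in that teardown carries the same request id (id=bb355168-... for the revision-pruner-8 sandbox above). A sketch of recovering the per-request step sequence from a plain-text dump of this journal (the journal.txt filename is hypothetical):

    # Sketch: group runSandbox records by their id= field to see the
    # cleanup order for one request: cleaning up namespaces -> deleting
    # pod ID from idIndex -> removing pod sandbox -> deleting container
    # ID -> unmounting shmPath -> removing from storage -> releasing names.
    import re
    from collections import defaultdict

    rec = re.compile(r'msg="runSandbox: ([^"]+)" id=([0-9a-f-]+)')
    steps = defaultdict(list)
    with open("journal.txt") as fh:        # hypothetical dump of this log
        for line in fh:
            for msg, req_id in rec.findall(line):
                steps[req_id].append(msg)

    for req_id, msgs in steps.items():
        print(req_id)
        for msg in msgs:
            print("   ", msg)

For id=bb355168-... this prints exactly the ordering visible above, which helps when teardown records for several sandboxes interleave (as they do at 17:15:22).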
Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.095466474Z" level=info msg="runSandbox: removing pod sandbox from storage: 0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.099137651Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:25.099157722Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=bb355168-837e-43c1-9f63-7f758de11969 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:25.099370 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:15:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:25.099534 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:15:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:25.099557 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:15:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:25.099609 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(0ce2676300e0fa8f94d534978a402e94137cf660602a877f037dd68d6e530b91): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.034849654Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.034886084Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0bf01a7f\x2d2e0b\x2d411c\x2daa70\x2defe1f3031237.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0bf01a7f\x2d2e0b\x2d411c\x2daa70\x2defe1f3031237.mount has successfully entered the 'dead' state. Jan 23 17:15:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0bf01a7f\x2d2e0b\x2d411c\x2daa70\x2defe1f3031237.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0bf01a7f\x2d2e0b\x2d411c\x2daa70\x2defe1f3031237.mount has successfully entered the 'dead' state. Jan 23 17:15:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0bf01a7f\x2d2e0b\x2d411c\x2daa70\x2defe1f3031237.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0bf01a7f\x2d2e0b\x2d411c\x2daa70\x2defe1f3031237.mount has successfully entered the 'dead' state. 
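The kubelet side reports each failure four times, at descending layers: remote_runtime.go (the CRI call), kuberuntime_sandbox.go, kuberuntime_manager.go, and finally pod_workers.go with "Error syncing pod, skipping". Only that last record carries both pod= and podUID=, so it is the easiest place to count how many distinct pods are stuck. A sketch, again against a hypothetical journal.txt dump:

    # Sketch: count "Error syncing pod, skipping" victims by pod/podUID.
    # The regex is tailored to the kubenswrapper pod_workers records above.
    import re
    from collections import Counter

    pat = re.compile(r'pod="(?P<pod>[^"]+)" podUID=(?P<uid>[0-9a-f-]+)')
    counts = Counter()
    with open("journal.txt") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                counts[(m["pod"], m["uid"])] += 1

    for (pod, uid), n in counts.most_common():
        print(f"{n:4d}  {pod}  {uid}")

In this window that list is the guard pods, dns-default, network-check-target, network-metrics-daemon, ingress-canary, and the kube-apiserver revision-pruner/installer pods, i.e. everything on the node that needs pod networking.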
Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.097306970Z" level=info msg="runSandbox: deleting pod ID f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf from idIndex" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.097331266Z" level=info msg="runSandbox: removing pod sandbox f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.097343664Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.097357037Z" level=info msg="runSandbox: unmounting shmPath for sandbox f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.119406575Z" level=info msg="runSandbox: removing pod sandbox from storage: f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.123007517Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:26.123026007Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=f1a6cac7-3ecb-4a50-9e66-180918477efe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:26.123248 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:15:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:26.123290 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:15:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:26.123310 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:15:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:26.123360 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f6909ded1133ee4ae7f0104ca954148eddbdd5fded9b03e33de4c9e54b4ba8cf): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:27.890366 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:27.890386 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:27.890393 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:27.890398 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:27.890404 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:27.890410 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:27.890417 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:27.897146948Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=c9e1b728-8621-4439-a68f-5380d8bf7909 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:15:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:27.897306066Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c9e1b728-8621-4439-a68f-5380d8bf7909 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:27.996998 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:15:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:27.997508 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.037515276Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.037551258Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.037560419Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.037601476Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c75ac5f5\x2d60cc\x2d49b4\x2d8a96\x2da4c80cc25b0c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c75ac5f5\x2d60cc\x2d49b4\x2d8a96\x2da4c80cc25b0c.mount has successfully entered the 'dead' state. Jan 23 17:15:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5a8143b7\x2dfd73\x2d4019\x2d9ec3\x2df2ab19398aad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5a8143b7\x2dfd73\x2d4019\x2d9ec3\x2df2ab19398aad.mount has successfully entered the 'dead' state. Jan 23 17:15:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c75ac5f5\x2d60cc\x2d49b4\x2d8a96\x2da4c80cc25b0c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c75ac5f5\x2d60cc\x2d49b4\x2d8a96\x2da4c80cc25b0c.mount has successfully entered the 'dead' state. Jan 23 17:15:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5a8143b7\x2dfd73\x2d4019\x2d9ec3\x2df2ab19398aad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5a8143b7\x2dfd73\x2d4019\x2d9ec3\x2df2ab19398aad.mount has successfully entered the 'dead' state. 
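A few records back is the root of all of this: ovnkube-node-897lw is in CrashLoopBackOff, and kubelet will not retry it for now ("back-off 5m0s restarting failed container"). Kubelet's restart back-off doubles per failed restart up to a cap; the 5m cap matches the message, while the 10s initial delay below is a commonly cited default and is an assumption here:

    # Sketch of kubelet-style container restart back-off. cap=300s matches
    # "back-off 5m0s" in this log; initial=10s is an assumed default.
    def restart_backoff(failed_restarts: int,
                        initial: float = 10.0,
                        cap: float = 300.0) -> float:
        return min(initial * 2 ** failed_restarts, cap)

    print([restart_backoff(n) for n in range(7)])
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

This is the loop the node is stuck in: ovnkube-node never runs long enough to write 10-ovn-kubernetes.conf, Multus therefore times out every add and delete, and the sandbox churn above repeats each back-off period.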
Jan 23 17:15:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c75ac5f5\x2d60cc\x2d49b4\x2d8a96\x2da4c80cc25b0c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c75ac5f5\x2d60cc\x2d49b4\x2d8a96\x2da4c80cc25b0c.mount has successfully entered the 'dead' state. Jan 23 17:15:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5a8143b7\x2dfd73\x2d4019\x2d9ec3\x2df2ab19398aad.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5a8143b7\x2dfd73\x2d4019\x2d9ec3\x2df2ab19398aad.mount has successfully entered the 'dead' state. Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.083309329Z" level=info msg="runSandbox: deleting pod ID 603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1 from idIndex" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.083334060Z" level=info msg="runSandbox: removing pod sandbox 603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.083347477Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.083359587Z" level=info msg="runSandbox: unmounting shmPath for sandbox 603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.084312149Z" level=info msg="runSandbox: deleting pod ID f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef from idIndex" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.084335731Z" level=info msg="runSandbox: removing pod sandbox f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.084349459Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.084360953Z" level=info msg="runSandbox: unmounting shmPath for sandbox f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.104448427Z" level=info msg="runSandbox: removing pod sandbox from storage: f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.104450311Z" level=info msg="runSandbox: removing pod sandbox from storage: 603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.108153242Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.108172495Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=716e1fb4-d76f-4e50-83bc-e55ba9543e40 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:28.108487 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:28.108528 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:28.108549 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:28.108595 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.111696669Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.111719112Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=dd9d6de4-19b3-4d37-8ec1-63330ee79e17 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:28.111829 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:28.111864 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:28.111886 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:15:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:28.111924 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(603a8da793c7568ad0ce6d3b4bcc865e509953e095ec202e852dd4b87ccf24d1): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:15:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:28.142646195Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:15:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f77b15dbebb837cbf27b3660eef9f2e2e6f972d03c2d4791bc27fae2213404ef-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.039213446Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.039431911Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.042029452Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.042072014Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8a1f00e4\x2da1aa\x2d4806\x2dba24\x2d0617b0e44cb7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-8a1f00e4\x2da1aa\x2d4806\x2dba24\x2d0617b0e44cb7.mount has successfully entered the 'dead' state.
Jan 23 17:15:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3fd5da66\x2d1829\x2d48ae\x2d99ea\x2d59c2077e5496.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3fd5da66\x2d1829\x2d48ae\x2d99ea\x2d59c2077e5496.mount has successfully entered the 'dead' state.
Jan 23 17:15:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8a1f00e4\x2da1aa\x2d4806\x2dba24\x2d0617b0e44cb7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-8a1f00e4\x2da1aa\x2d4806\x2dba24\x2d0617b0e44cb7.mount has successfully entered the 'dead' state.
Jan 23 17:15:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3fd5da66\x2d1829\x2d48ae\x2d99ea\x2d59c2077e5496.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3fd5da66\x2d1829\x2d48ae\x2d99ea\x2d59c2077e5496.mount has successfully entered the 'dead' state.
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.091336460Z" level=info msg="runSandbox: deleting pod ID 79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d from idIndex" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.091372593Z" level=info msg="runSandbox: removing pod sandbox 79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.091392627Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.091410356Z" level=info msg="runSandbox: unmounting shmPath for sandbox 79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.091345225Z" level=info msg="runSandbox: deleting pod ID 63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e from idIndex" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.091459000Z" level=info msg="runSandbox: removing pod sandbox 63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.091473143Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.091487072Z" level=info msg="runSandbox: unmounting shmPath for sandbox 63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.103463308Z" level=info msg="runSandbox: removing pod sandbox from storage: 63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.103484586Z" level=info msg="runSandbox: removing pod sandbox from storage: 79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.107195178Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.107222545Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=ec6b7113-b778-449c-bab0-979c51ec5db9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:31.107381 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:31.107425 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:31.107449 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:31.107496 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.110610167Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:31.110632422Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=1816db92-f6a7-42d5-ae80-43b14521d8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:31.110829 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:31.110869 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:31.110891 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:15:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:31.110936 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 17:15:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8a1f00e4\x2da1aa\x2d4806\x2dba24\x2d0617b0e44cb7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-8a1f00e4\x2da1aa\x2d4806\x2dba24\x2d0617b0e44cb7.mount has successfully entered the 'dead' state.
Jan 23 17:15:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3fd5da66\x2d1829\x2d48ae\x2d99ea\x2d59c2077e5496.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-3fd5da66\x2d1829\x2d48ae\x2d99ea\x2d59c2077e5496.mount has successfully entered the 'dead' state.
Jan 23 17:15:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-63946e98f774ef7eb9dac87c1e25506439f2cd1f77c45b4e70037dec98d9d89e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:15:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-79eaf0730d8ac4007aab307836c6eb858b5b14f636444b1fa7225600aa885c5d-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:32.995622 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:15:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:32.995754 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:15:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:32.995979905Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:32.996039110Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:32.996133002Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:32.996177907Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.011627406Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/5f668fc3-abc0-42c2-9bf7-fc5df811aa6a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.011647940Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.012063290Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/af98bb39-3600-440b-97d3-9cf3a31565d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.012082488Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.032603722Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.032634178Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a334a83c\x2d5c29\x2d4a8f\x2db778\x2d7e96c83f6367.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a334a83c\x2d5c29\x2d4a8f\x2db778\x2d7e96c83f6367.mount has successfully entered the 'dead' state.
Jan 23 17:15:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a334a83c\x2d5c29\x2d4a8f\x2db778\x2d7e96c83f6367.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a334a83c\x2d5c29\x2d4a8f\x2db778\x2d7e96c83f6367.mount has successfully entered the 'dead' state.
Jan 23 17:15:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a334a83c\x2d5c29\x2d4a8f\x2db778\x2d7e96c83f6367.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a334a83c\x2d5c29\x2d4a8f\x2db778\x2d7e96c83f6367.mount has successfully entered the 'dead' state.
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.072306362Z" level=info msg="runSandbox: deleting pod ID 86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe from idIndex" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.072329422Z" level=info msg="runSandbox: removing pod sandbox 86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.072342702Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.072353344Z" level=info msg="runSandbox: unmounting shmPath for sandbox 86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.096432532Z" level=info msg="runSandbox: removing pod sandbox from storage: 86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.099280699Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.099298550Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=de09b54c-7b7f-47bd-8c57-0e70fab19304 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:33.099485 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:33.099525 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:33.099549 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:33.099594 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:15:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:33.996248 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.996613939Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:33.996650159Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:34.007381718Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/f82af5ab-7cb8-4320-80d9-a308d0c01565 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:34.007403072Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-86dac9e612431ece97e4f4dfa7a44c6d27fb03dd1223198850dc3538dceceabe-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.034829980Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.034881284Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dee9c5fb\x2dfd80\x2d4ba7\x2d9b2a\x2d67f86cb5eb23.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-dee9c5fb\x2dfd80\x2d4ba7\x2d9b2a\x2d67f86cb5eb23.mount has successfully entered the 'dead' state.
Jan 23 17:15:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dee9c5fb\x2dfd80\x2d4ba7\x2d9b2a\x2d67f86cb5eb23.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-dee9c5fb\x2dfd80\x2d4ba7\x2d9b2a\x2d67f86cb5eb23.mount has successfully entered the 'dead' state.
Jan 23 17:15:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dee9c5fb\x2dfd80\x2d4ba7\x2d9b2a\x2d67f86cb5eb23.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-dee9c5fb\x2dfd80\x2d4ba7\x2d9b2a\x2d67f86cb5eb23.mount has successfully entered the 'dead' state.
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.072329050Z" level=info msg="runSandbox: deleting pod ID d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0 from idIndex" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.072356992Z" level=info msg="runSandbox: removing pod sandbox d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.072374454Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.072387312Z" level=info msg="runSandbox: unmounting shmPath for sandbox d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.093476263Z" level=info msg="runSandbox: removing pod sandbox from storage: d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.096764276Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:37.096786828Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=b208d9c4-10e3-4dcf-bd78-0d48ec39fbc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:37.097029 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:15:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:37.097187 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:15:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:37.097227 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:15:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:37.097278 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(d5d44a63d0499d603eefc309c2d5c82749f814fccb67f249415aad22ca6d6fd0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:15:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:38.996078 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:15:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:38.996374482Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:38.996408241Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:39.007560373Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/92a3798c-4df2-4928-bf39-8026cb6e8a8a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:39.007580170Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:39.996037 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:15:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:39.996233 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:15:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:39.996361031Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:39.996390297Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:39.996473629Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:39.996503460Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:40.011882927Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8cc5cbdf-6af0-4e87-b9fa-b9d77aa654b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:40.011902792Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:40.012097106Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/add528a2-f01b-415d-bb1a-a1fb0cbdf457 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:40.012117083Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:41.996127 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:15:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:41.996612083Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:41.996664357Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:42.011355655Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/8dc018ea-a0b9-48ab-be67-270c37747511 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:42.011384670Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:42.997151 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:15:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:42.997658 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:15:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:44.996317 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:15:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:44.996520 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:44.996740443Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:44.996786786Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:44.996835269Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:44.996797275Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:45.012660261Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/c5af0a75-168c-4a86-91c7-d638aa1520f9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:45.012682552Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:45.013428294Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/47b33375-96e2-4a0c-935b-8e273079de2e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:45.013447487Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:45.995994 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:45.996295895Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:45.996329380Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:46.006769256Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ecd99b75-2536-40f7-b4e5-65273d572777 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:46.006792547Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:50.995625 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:50.995948805Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:50.995989482Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:51.007691811Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/58192b4a-1d67-46e4-9ca5-97a0b26d1aa7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:15:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:51.007711486Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.931504541Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.931759672Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.931613869Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.931888374Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.931623258Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.931942324Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.931623715Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the
condition" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.931992816Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.933590930Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.933630540Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2dfa328e\x2d8bfc\x2d4ac6\x2db226\x2d602ce53c2768.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2dfa328e\x2d8bfc\x2d4ac6\x2db226\x2d602ce53c2768.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cad1a329\x2db253\x2d49c5\x2d8b6c\x2dfc2277bb45f8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cad1a329\x2db253\x2d49c5\x2d8b6c\x2dfc2277bb45f8.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a3fcf706\x2dc374\x2d4959\x2d89c9\x2dd11844fe4f52.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a3fcf706\x2dc374\x2d4959\x2d89c9\x2dd11844fe4f52.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f4e587a7\x2d6020\x2d4a9b\x2d945d\x2ded67fab2e0ec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f4e587a7\x2d6020\x2d4a9b\x2d945d\x2ded67fab2e0ec.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b0a6643c\x2da9fb\x2d4dc2\x2d91e1\x2d1a136e5ee608.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b0a6643c\x2da9fb\x2d4dc2\x2d91e1\x2d1a136e5ee608.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b0a6643c\x2da9fb\x2d4dc2\x2d91e1\x2d1a136e5ee608.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b0a6643c\x2da9fb\x2d4dc2\x2d91e1\x2d1a136e5ee608.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2dfa328e\x2d8bfc\x2d4ac6\x2db226\x2d602ce53c2768.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2dfa328e\x2d8bfc\x2d4ac6\x2db226\x2d602ce53c2768.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cad1a329\x2db253\x2d49c5\x2d8b6c\x2dfc2277bb45f8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cad1a329\x2db253\x2d49c5\x2d8b6c\x2dfc2277bb45f8.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a3fcf706\x2dc374\x2d4959\x2d89c9\x2dd11844fe4f52.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a3fcf706\x2dc374\x2d4959\x2d89c9\x2dd11844fe4f52.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f4e587a7\x2d6020\x2d4a9b\x2d945d\x2ded67fab2e0ec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f4e587a7\x2d6020\x2d4a9b\x2d945d\x2ded67fab2e0ec.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2dfa328e\x2d8bfc\x2d4ac6\x2db226\x2d602ce53c2768.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2dfa328e\x2d8bfc\x2d4ac6\x2db226\x2d602ce53c2768.mount has successfully entered the 'dead' state. Jan 23 17:15:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cad1a329\x2db253\x2d49c5\x2d8b6c\x2dfc2277bb45f8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cad1a329\x2db253\x2d49c5\x2d8b6c\x2dfc2277bb45f8.mount has successfully entered the 'dead' state. 
Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982332952Z" level=info msg="runSandbox: deleting pod ID 05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72 from idIndex" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982362891Z" level=info msg="runSandbox: removing pod sandbox 05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982376982Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982388494Z" level=info msg="runSandbox: unmounting shmPath for sandbox 05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982334291Z" level=info msg="runSandbox: deleting pod ID 1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42 from idIndex" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982460230Z" level=info msg="runSandbox: removing pod sandbox 1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982478294Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982493794Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982337314Z" level=info msg="runSandbox: deleting pod ID 8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309 from idIndex" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982584744Z" level=info msg="runSandbox: removing pod sandbox 8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982597904Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.982608871Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.983295020Z" level=info msg="runSandbox: deleting pod ID 0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f from idIndex" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.983326187Z" level=info msg="runSandbox: removing pod sandbox 0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.983345459Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.983362053Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.983298548Z" level=info msg="runSandbox: deleting pod ID b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38 from idIndex" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.983405895Z" level=info msg="runSandbox: removing pod sandbox b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.983417343Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.983430769Z" level=info msg="runSandbox: unmounting shmPath for sandbox b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:57.997559 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:15:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:57.998156 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.998450955Z" level=info msg="runSandbox: removing pod sandbox from storage: 05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72" 
id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.998479805Z" level=info msg="runSandbox: removing pod sandbox from storage: 8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.998454275Z" level=info msg="runSandbox: removing pod sandbox from storage: 1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.999416513Z" level=info msg="runSandbox: removing pod sandbox from storage: 0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:57.999593054Z" level=info msg="runSandbox: removing pod sandbox from storage: b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.001753155Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.001772166Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=dd4647ec-f4fe-4885-bb05-c9e58266405f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.001988 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.002026 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.002051 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.002104 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.004913357Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.004935218Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=aa9aa35b-e15d-4e66-b7ae-a06cd3c1591d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.005172 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.005225 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.005253 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.005301 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.007896147Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.007912450Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=85643e84-129e-43a5-a96d-f65abf07eff2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.008106 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.008142 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.008163 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.008210 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.015830386Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.015856003Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=3bac2a64-dffc-44cc-8916-2332e0a73226 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.016099 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.016136 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.016170 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.016226 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.018888660Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.018907744Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d222bb82-036c-4260-b4bf-a45a6c084fe7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.019117 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.019152 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.019173 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:15:58.019217 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:58.041287 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:58.041361 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:58.041499 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:58.041648 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041636590Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041679145Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:15:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:15:58.041746 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041749551Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041784500Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041831527Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041874876Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041910279Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041935256Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.041841405Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.042001255Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.068882518Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/96163bb9-0024-44d4-acca-6ff0163677dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.068902454Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.069801025Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/f8d1169d-dc79-4a29-86be-d5913f2b1b14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.069826363Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:15:58.070545446Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/7fa0871b-d45f-4c98-a0b2-dddb2ef7d718 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.070568787Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.074724458Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c99529ea-cadb-4781-9fda-cb7b61984666 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.074747905Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.075513946Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/4e1744c4-5637-455f-b11a-502b3eb68026 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.075533397Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:15:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:15:58.142525239Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:15:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b0a6643c\x2da9fb\x2d4dc2\x2d91e1\x2d1a136e5ee608.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b0a6643c\x2da9fb\x2d4dc2\x2d91e1\x2d1a136e5ee608.mount has successfully entered the 'dead' state. Jan 23 17:15:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a3fcf706\x2dc374\x2d4959\x2d89c9\x2dd11844fe4f52.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a3fcf706\x2dc374\x2d4959\x2d89c9\x2dd11844fe4f52.mount has successfully entered the 'dead' state. Jan 23 17:15:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f4e587a7\x2d6020\x2d4a9b\x2d945d\x2ded67fab2e0ec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f4e587a7\x2d6020\x2d4a9b\x2d945d\x2ded67fab2e0ec.mount has successfully entered the 'dead' state. Jan 23 17:15:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0af9be5e4be94cf3e0c656f94847fb94ed5adfee02043bd89f730868ddf8274f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8cc2200bca194258380d998f1fe94f839e4acf2ce61bf18a21c6d0e01cc40309-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-05dbb4cb77859613d513043a71ea3a9ca5610cd7109815d6de66606a9ee26a72-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1499da94dfb4c9d09278b19205260108d1fa8e1aa2d66105be6cbe81e4799a42-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:15:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b2ae0aee464e6f21e775e59696492bf2f60c79fb0bce4b8f0e1a69a487609d38-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:05.027490883Z" level=info msg="NetworkStart: stopping network for sandbox fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:05.028432483Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/86837049-db93-4b1b-9ca7-2b9289a27041 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:05.028478970Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:05.028487375Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:16:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:05.028494854Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:16:09 hub-master-0.workload.bos2.lab conmon[105784]: conmon 4cd7e96020d7236a9b69 : container 105795 exited with status 1 Jan 23 17:16:09 hub-master-0.workload.bos2.lab systemd[1]: crio-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope has successfully entered the 'dead' state. Jan 23 17:16:09 hub-master-0.workload.bos2.lab systemd[1]: crio-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope: Consumed 3.709s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope completed and consumed the indicated resources. Jan 23 17:16:09 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope has successfully entered the 'dead' state. Jan 23 17:16:09 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope: Consumed 52ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314.scope completed and consumed the indicated resources. 
Jan 23 17:16:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:09.996945 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:16:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:16:09.997596 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:16:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:10.063913 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314" exitCode=1
Jan 23 17:16:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:10.063937 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314}
Jan 23 17:16:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:10.063958 8631 scope.go:115] "RemoveContainer" containerID="ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5"
Jan 23 17:16:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:10.064217 8631 scope.go:115] "RemoveContainer" containerID="4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314"
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.064723574Z" level=info msg="Removing container: ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5" id=837c1a7c-b8c5-4fd3-b7c2-1ef897145c5c name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.064762716Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=7eaaf157-4bef-4525-9036-b57adc9cbc1b name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.064966775Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7eaaf157-4bef-4525-9036-b57adc9cbc1b name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.065512526Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=1e087435-a79e-4329-9a63-2d4aafec9544 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.065646355Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1e087435-a79e-4329-9a63-2d4aafec9544 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.066132255Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=d6feb2be-293e-4598-b48b-9bdb1225e82d name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.066217664Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:16:10 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-6d2a7a510beb6c21d16be46fbd3849a1e9c43b02442bf8648741c17f6354d2e3-merged.mount: Succeeded.
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.101809521Z" level=info msg="Removed container ac84125bfc286157076c81042f6e373e20d7fd71b60c4399130086fbba8ab4f5: openshift-multus/multus-cdt6c/kube-multus" id=837c1a7c-b8c5-4fd3-b7c2-1ef897145c5c name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:16:10 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.scope.
Jan 23 17:16:10 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.
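Note: the kubelet entries above show both halves of a restart cycle. The multus-cdt6c container that just exited is restarted almost immediately (ContainerDied at 17:16:10, ContainerStarted by 17:16:11), while ovnkube-node is refused with "back-off 5m0s", the cap of the kubelet's per-container crash backoff. A small Go sketch of that doubling-with-cap policy; the 10s initial delay matches the upstream kubelet default but should be treated as an assumption here:

    package main

    import (
        "fmt"
        "time"
    )

    // restartDelay models kubelet-style container restart backoff: start
    // small, double per consecutive crash, cap at 5 minutes. Illustrative,
    // not kubelet's actual code.
    func restartDelay(crashCount int) time.Duration {
        const initial = 10 * time.Second
        const maxDelay = 5 * time.Minute
        d := initial
        for i := 1; i < crashCount; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("crash %d -> wait %v\n", n, restartDelay(n))
        }
        // From the sixth consecutive crash onward this prints 5m0s,
        // matching the "back-off 5m0s" lines for ovnkube-node in this log.
    }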
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.203830803Z" level=info msg="Created container 628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464: openshift-multus/multus-cdt6c/kube-multus" id=d6feb2be-293e-4598-b48b-9bdb1225e82d name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.204263060Z" level=info msg="Starting container: 628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464" id=0b78a77d-7bdb-4fc0-b45a-ec30f8fc7a29 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.210898471Z" level=info msg="Started container" PID=123846 containerID=628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464 description=openshift-multus/multus-cdt6c/kube-multus id=0b78a77d-7bdb-4fc0-b45a-ec30f8fc7a29 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.215441505Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_73bb4394-c2a3-4e50-a8cf-e66c92ced646\""
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.225615768Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.225632539Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.237224802Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\""
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.246693425Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.246710633Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:16:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:10.246721877Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_73bb4394-c2a3-4e50-a8cf-e66c92ced646\""
Jan 23 17:16:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:11.067343 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464}
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.025643614Z" level=info msg="NetworkStart: stopping network for sandbox 6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.025961678Z" level=info msg="NetworkStart: stopping network for sandbox 8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.026010728Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/af98bb39-3600-440b-97d3-9cf3a31565d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.026038408Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.026045000Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.026052245Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.026076483Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/5f668fc3-abc0-42c2-9bf7-fc5df811aa6a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.026097717Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.026103732Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:18.026110188Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:19.021224940Z" level=info msg="NetworkStart: stopping network for sandbox da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:19.021367083Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/f82af5ab-7cb8-4320-80d9-a308d0c01565 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:19.021389466Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:19.021397158Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:19.021403880Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:24.021666158Z" level=info msg="NetworkStart: stopping network for sandbox 871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:24.021816192Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/92a3798c-4df2-4928-bf39-8026cb6e8a8a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:24.021839705Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:24.021847150Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:24.021854235Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:24.997247 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:16:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:16:24.997741 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.024551620Z" level=info msg="NetworkStart: stopping network for sandbox bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.024694582Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/add528a2-f01b-415d-bb1a-a1fb0cbdf457 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.024718907Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.024725479Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.024732214Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.026009788Z" level=info msg="NetworkStart: stopping network for sandbox 7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.026125750Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8cc5cbdf-6af0-4e87-b9fa-b9d77aa654b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.026146434Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.026153711Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:25.026160151Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:27.024004901Z" level=info msg="NetworkStart: stopping network for sandbox 5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:27.024164978Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/8dc018ea-a0b9-48ab-be67-270c37747511 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:27.024193778Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:27.024201084Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:27.024218560Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:27.891412 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:27.891431 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:27.891437 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:27.891444 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:27.891450 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:27.891457 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:16:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:27.891463 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:16:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:28.143593454Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026309127Z" level=info msg="NetworkStart: stopping network for sandbox 07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026515462Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/47b33375-96e2-4a0c-935b-8e273079de2e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026542698Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026550943Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026558821Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026543658Z" level=info msg="NetworkStart: stopping network for sandbox 88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026704503Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/c5af0a75-168c-4a86-91c7-d638aa1520f9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026726512Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026733620Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:30.026741326Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:31.021691904Z" level=info msg="NetworkStart: stopping network for sandbox 3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:31.021852988Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ecd99b75-2536-40f7-b4e5-65273d572777 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:31.021878428Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:31.021885660Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:31.021893139Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:36.021490102Z" level=info msg="NetworkStart: stopping network for sandbox 1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:36.021623756Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/58192b4a-1d67-46e4-9ca5-97a0b26d1aa7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:36.021646461Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:36.021653247Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:36.021660210Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:39.996387 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:16:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:16:39.997027 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.082848316Z" level=info msg="NetworkStart: stopping network for sandbox 097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.082993732Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/96163bb9-0024-44d4-acca-6ff0163677dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083017770Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083025346Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083031842Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083550574Z" level=info msg="NetworkStart: stopping network for sandbox bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083708220Z" level=info msg="NetworkStart: stopping network for sandbox cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083710249Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/f8d1169d-dc79-4a29-86be-d5913f2b1b14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083764459Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083773187Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083780765Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083850519Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/7fa0871b-d45f-4c98-a0b2-dddb2ef7d718 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083877855Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083885933Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.083894002Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.086993879Z" level=info msg="NetworkStart: stopping network for sandbox 173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.087106320Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/4e1744c4-5637-455f-b11a-502b3eb68026 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.087127986Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.087135468Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.087142223Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.088395945Z" level=info msg="NetworkStart: stopping network for sandbox 8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.088511584Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c99529ea-cadb-4781-9fda-cb7b61984666 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.088533696Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.088539834Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:16:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:43.088545720Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.043060756Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.043101292Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-86837049\x2ddb93\x2d4b1b\x2d9ca7\x2d2b9289a27041.mount: Succeeded.
Jan 23 17:16:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-86837049\x2ddb93\x2d4b1b\x2d9ca7\x2d2b9289a27041.mount: Succeeded.
Jan 23 17:16:50 hub-master-0.workload.bos2.lab systemd[1]: run-netns-86837049\x2ddb93\x2d4b1b\x2d9ca7\x2d2b9289a27041.mount: Succeeded.
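Note: every add and del failure in this section has the same root cause. Multus polls for the readiness indicator file /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovnkube-node would write once the default OVN network is up, and the poll times out because ovnkube-node is stuck in CrashLoopBackOff. A minimal sketch of that wait using the same apimachinery helper the error message names (the interval and timeout values are illustrative assumptions):

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait" // go get k8s.io/apimachinery
    )

    // waitForReadinessFile polls until the readiness indicator file exists,
    // the same shape of check Multus performs before serving ADD/DEL.
    func waitForReadinessFile(path string, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err == nil {
                return true, nil // file is there: default network is ready
            } else if os.IsNotExist(err) {
                return false, nil // keep polling
            } else {
                return false, err // a real I/O error aborts the poll
            }
        })
    }

    func main() {
        err := waitForReadinessFile("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 45*time.Second)
        if err != nil {
            // With ovnkube-node crash-looping the file never appears, so this
            // returns wait.ErrWaitTimeout, whose message is exactly the
            // "timed out waiting for the condition" seen in the log.
            fmt.Println("PollImmediate error:", err)
        }
    }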
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.076369090Z" level=info msg="runSandbox: deleting pod ID fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037 from idIndex" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.076563593Z" level=info msg="runSandbox: removing pod sandbox fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.076577924Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.076590696Z" level=info msg="runSandbox: unmounting shmPath for sandbox fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037-userdata-shm.mount: Succeeded.
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.090462084Z" level=info msg="runSandbox: removing pod sandbox from storage: fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.094111017Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:50.094130831Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=a2c692f2-c9dd-4990-af7f-bed63f36e4ad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:16:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:16:50.094433 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:16:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:16:50.094478 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:16:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:16:50.094502 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:16:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:16:50.094552 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(fa6bcb3a226c1513532ce69bbe4d96940ab928f602de99512a56ea0262057037): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:16:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:16:54.996337 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:16:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:16:54.996844 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:16:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:16:58.142376401Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.038281950Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.038323396Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.038303949Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.038438333Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-af98bb39\x2d3600\x2d440b\x2d97d3\x2d9cf3a31565d7.mount: Succeeded.
Jan 23 17:17:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5f668fc3\x2dabc0\x2d42c2\x2d9bf7\x2dfc5df811aa6a.mount: Succeeded.
Jan 23 17:17:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-af98bb39\x2d3600\x2d440b\x2d97d3\x2d9cf3a31565d7.mount: Succeeded.
Jan 23 17:17:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5f668fc3\x2dabc0\x2d42c2\x2d9bf7\x2dfc5df811aa6a.mount: Succeeded.
Jan 23 17:17:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-af98bb39\x2d3600\x2d440b\x2d97d3\x2d9cf3a31565d7.mount: Succeeded.
Jan 23 17:17:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5f668fc3\x2dabc0\x2d42c2\x2d9bf7\x2dfc5df811aa6a.mount: Succeeded.
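Note: reading the four kubenswrapper errors at 17:16:50 top to bottom shows one CNI failure being re-wrapped at each layer (remote_runtime -> kuberuntime_sandbox -> kuberuntime_manager -> pod_workers); the doubled and tripled backslashes in the pod_workers entry are just successive rounds of Go %q quoting, not corruption. A toy reproduction of that escaping (the strings are abbreviated stand-ins for the full messages above):

    package main

    import "fmt"

    func main() {
        // The innermost message, as Multus produced it, already contains quotes.
        cni := `plugin type="multus" name="multus-cni-network" failed (add)`
        rpc := fmt.Sprintf("rpc error: code = Unknown desc = failed to create pod network sandbox: %s", cni)

        // One layer of %q quoting turns each " into \" ...
        wrapped := fmt.Sprintf("failed to %q with CreatePodSandboxError: %q", "CreatePodSandbox", rpc)

        // ... and quoting the whole err string again turns \" into \\\",
        // which is exactly the pattern in the pod_workers.go:965 line.
        logged := fmt.Sprintf("err=%q", wrapped)

        fmt.Println(wrapped)
        fmt.Println(logged)
    }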
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.081340941Z" level=info msg="runSandbox: deleting pod ID 6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28 from idIndex" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.081376978Z" level=info msg="runSandbox: removing pod sandbox 6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.081395767Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.081342555Z" level=info msg="runSandbox: deleting pod ID 8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54 from idIndex" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.081434917Z" level=info msg="runSandbox: removing pod sandbox 8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.081449137Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.081468913Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.081451293Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28-userdata-shm.mount: Succeeded.
Jan 23 17:17:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54-userdata-shm.mount: Succeeded.
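Note: the runSandbox teardown above, and the entries that follow, always walk the same ordered steps: delete the pod ID from the idIndex, remove the sandbox, delete the container ID, unmount the shmPath, remove the sandbox from storage, then release the container and sandbox names. A table-driven Go sketch of that sequence; the step bodies are stubs and the keep-going error handling is an assumption for illustration, not CRI-O's code:

    package main

    import "fmt"

    // cleanupStep pairs a label with an action so each stage can be logged
    // the way the runSandbox messages above are.
    type cleanupStep struct {
        name string
        run  func() error
    }

    func cleanupSandbox(id string) {
        steps := []cleanupStep{
            {"deleting pod ID from idIndex", func() error { return nil }},
            {"removing pod sandbox", func() error { return nil }},
            {"deleting container ID from idIndex", func() error { return nil }},
            {"unmounting shmPath", func() error { return nil }},
            {"removing pod sandbox from storage", func() error { return nil }},
            {"releasing container name", func() error { return nil }},
            {"releasing pod sandbox name", func() error { return nil }},
        }
        for _, s := range steps {
            // Keep attempting later stages even if one fails, so a
            // half-created sandbox still gets its mounts and names released.
            if err := s.run(); err != nil {
                fmt.Printf("runSandbox: %s failed for sandbox %s: %v\n", s.name, id, err)
                continue
            }
            fmt.Printf("runSandbox: %s for sandbox %s\n", s.name, id)
        }
    }

    func main() {
        cleanupSandbox("6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28")
    }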
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.093429356Z" level=info msg="runSandbox: removing pod sandbox from storage: 6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.093463945Z" level=info msg="runSandbox: removing pod sandbox from storage: 8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.097160139Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.097180657Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=b284a09c-d39d-4989-a119-590331f6a4c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:03.097458 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:03.097654 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:03.097674 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:03.097719 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6935d3f9b35122a4e8880c24990f1ad6633b9d4b4400f73e342c5a7a8de37e28): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.105241013Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.105271447Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=8750b544-46d0-4f0c-a7a4-8e9169bba673 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:03.105508 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:03.105553 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:03.105577 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:03.105624 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8c58e488cc681e223be1eb511fe7c4b8fd576bacbf9575a024bc77d596e98b54): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:17:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:03.996102 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.996479160Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:03.996525934Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.009120348Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/c9f13c71-558a-4644-bce7-9ddfee03b93c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.009147620Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.032582108Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.032615269Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f82af5ab\x2d7cb8\x2d4320\x2d80d9\x2da308d0c01565.mount: Succeeded.
Jan 23 17:17:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f82af5ab\x2d7cb8\x2d4320\x2d80d9\x2da308d0c01565.mount: Succeeded.
Jan 23 17:17:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f82af5ab\x2d7cb8\x2d4320\x2d80d9\x2da308d0c01565.mount: Succeeded.
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f82af5ab\x2d7cb8\x2d4320\x2d80d9\x2da308d0c01565.mount has successfully entered the 'dead' state. Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.070309006Z" level=info msg="runSandbox: deleting pod ID da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8 from idIndex" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.070332131Z" level=info msg="runSandbox: removing pod sandbox da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.070346049Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.070357334Z" level=info msg="runSandbox: unmounting shmPath for sandbox da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.083448113Z" level=info msg="runSandbox: removing pod sandbox from storage: da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.086425088Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:04.086442163Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=e500f0e8-557d-4686-8d1e-e65731a65595 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:04.086644 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:04.086689 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:17:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:04.086711 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:17:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:04.086753 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(da3688108478e822140aa735982db14209ef8aac400a12c901c3530b43d889e8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:17:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:06.996032 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:17:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:06.996536 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:17:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494228.1350] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:17:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494228.1356] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:17:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494228.1356] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:17:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494228.1358] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:17:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494228.1363] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:17:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494228.1369] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.033631851Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.033899465Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-92a3798c\x2d4df2\x2d4928\x2dbf39\x2d8026cb6e8a8a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-92a3798c\x2d4df2\x2d4928\x2dbf39\x2d8026cb6e8a8a.mount has successfully entered the 'dead' state. 
Jan 23 17:17:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-92a3798c\x2d4df2\x2d4928\x2dbf39\x2d8026cb6e8a8a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-92a3798c\x2d4df2\x2d4928\x2dbf39\x2d8026cb6e8a8a.mount has successfully entered the 'dead' state. Jan 23 17:17:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-92a3798c\x2d4df2\x2d4928\x2dbf39\x2d8026cb6e8a8a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-92a3798c\x2d4df2\x2d4928\x2dbf39\x2d8026cb6e8a8a.mount has successfully entered the 'dead' state. Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.071318364Z" level=info msg="runSandbox: deleting pod ID 871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4 from idIndex" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.071344191Z" level=info msg="runSandbox: removing pod sandbox 871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.071358645Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.071370066Z" level=info msg="runSandbox: unmounting shmPath for sandbox 871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.084478947Z" level=info msg="runSandbox: removing pod sandbox from storage: 871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.088033858Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:09.088052051Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=0d1a5fab-044e-4658-ba89-c7bc700bd096 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:09.088271 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:09.088315 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:17:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:09.088338 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:17:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:09.088382 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871ac5ee6c07d6142ff3df6588eaffb55977047b8b1d2148ad2f8f4131ee16f4): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.037012811Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.037057896Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.037683769Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.037716227Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-add528a2\x2df01b\x2d415d\x2dbb1a\x2da1fb0cbdf457.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-add528a2\x2df01b\x2d415d\x2dbb1a\x2da1fb0cbdf457.mount has successfully entered the 'dead' state. Jan 23 17:17:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8cc5cbdf\x2d6af0\x2d4e87\x2db9fa\x2db9d77aa654b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8cc5cbdf\x2d6af0\x2d4e87\x2db9fa\x2db9d77aa654b2.mount has successfully entered the 'dead' state. Jan 23 17:17:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-add528a2\x2df01b\x2d415d\x2dbb1a\x2da1fb0cbdf457.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-add528a2\x2df01b\x2d415d\x2dbb1a\x2da1fb0cbdf457.mount has successfully entered the 'dead' state. Jan 23 17:17:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8cc5cbdf\x2d6af0\x2d4e87\x2db9fa\x2db9d77aa654b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8cc5cbdf\x2d6af0\x2d4e87\x2db9fa\x2db9d77aa654b2.mount has successfully entered the 'dead' state. Jan 23 17:17:10 hub-master-0.workload.bos2.lab systemd[1]: run-netns-add528a2\x2df01b\x2d415d\x2dbb1a\x2da1fb0cbdf457.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-add528a2\x2df01b\x2d415d\x2dbb1a\x2da1fb0cbdf457.mount has successfully entered the 'dead' state. Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.076326663Z" level=info msg="runSandbox: deleting pod ID 7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d from idIndex" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.076352259Z" level=info msg="runSandbox: removing pod sandbox 7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.076365292Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.076377871Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.076331715Z" level=info msg="runSandbox: deleting pod ID bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e from idIndex" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.076439992Z" level=info msg="runSandbox: removing pod sandbox bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.076452239Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.076463534Z" level=info msg="runSandbox: unmounting shmPath for sandbox bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.092473435Z" level=info msg="runSandbox: removing pod sandbox from 
storage: 7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.092477206Z" level=info msg="runSandbox: removing pod sandbox from storage: bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.095864809Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.095883400Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=858359cd-44ab-4324-8fd8-1e3355b65010 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:10.096136 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:10.096177 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:10.096200 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:10.096248 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.098785146Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:10.098804116Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=b4d3a26e-36d1-4d92-8657-8ec42b1e01e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:10.098980 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:10.099015 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:10.099036 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:17:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:10.099079 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:17:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494230.3655] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:17:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8cc5cbdf\x2d6af0\x2d4e87\x2db9fa\x2db9d77aa654b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8cc5cbdf\x2d6af0\x2d4e87\x2db9fa\x2db9d77aa654b2.mount has successfully entered the 'dead' state. Jan 23 17:17:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bd55aaefbb993da60dc686a976b77472f08713dc8874b6e54fba65a58ea32f7e-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:17:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7bd108fecb1b83b253cca00b505bf25bb61b54030f4bcd87bae69a246987c84d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.035168214Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.035214913Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8dc018ea\x2da0b9\x2d48ab\x2dbe67\x2d270c37747511.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8dc018ea\x2da0b9\x2d48ab\x2dbe67\x2d270c37747511.mount has successfully entered the 'dead' state. Jan 23 17:17:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8dc018ea\x2da0b9\x2d48ab\x2dbe67\x2d270c37747511.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8dc018ea\x2da0b9\x2d48ab\x2dbe67\x2d270c37747511.mount has successfully entered the 'dead' state. Jan 23 17:17:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8dc018ea\x2da0b9\x2d48ab\x2dbe67\x2d270c37747511.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8dc018ea\x2da0b9\x2d48ab\x2dbe67\x2d270c37747511.mount has successfully entered the 'dead' state. 
Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.080347606Z" level=info msg="runSandbox: deleting pod ID 5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869 from idIndex" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.080379460Z" level=info msg="runSandbox: removing pod sandbox 5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.080396215Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.080413121Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.091452163Z" level=info msg="runSandbox: removing pod sandbox from storage: 5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.095038497Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:12.095057938Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=db1ad2c6-a197-4b15-9674-05fe365361bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:12.095315 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:17:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:12.095364 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:17:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:12.095392 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:17:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:12.095444 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5221cb186db81476f64004804632264eae0ac7e8b7a618e21893111eb3949869): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.037475529Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.037528927Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.037961353Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.037995845Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-47b33375\x2d96e2\x2d4a0c\x2d935b\x2d8e273079de2e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-47b33375\x2d96e2\x2d4a0c\x2d935b\x2d8e273079de2e.mount has successfully entered the 'dead' state. Jan 23 17:17:15 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c5af0a75\x2d168c\x2d4a86\x2d91c7\x2dd638aa1520f9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c5af0a75\x2d168c\x2d4a86\x2d91c7\x2dd638aa1520f9.mount has successfully entered the 'dead' state. Jan 23 17:17:15 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-47b33375\x2d96e2\x2d4a0c\x2d935b\x2d8e273079de2e.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-47b33375\x2d96e2\x2d4a0c\x2d935b\x2d8e273079de2e.mount has successfully entered the 'dead' state. Jan 23 17:17:15 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c5af0a75\x2d168c\x2d4a86\x2d91c7\x2dd638aa1520f9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c5af0a75\x2d168c\x2d4a86\x2d91c7\x2dd638aa1520f9.mount has successfully entered the 'dead' state. Jan 23 17:17:15 hub-master-0.workload.bos2.lab systemd[1]: run-netns-47b33375\x2d96e2\x2d4a0c\x2d935b\x2d8e273079de2e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-47b33375\x2d96e2\x2d4a0c\x2d935b\x2d8e273079de2e.mount has successfully entered the 'dead' state. Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.089440720Z" level=info msg="runSandbox: deleting pod ID 07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6 from idIndex" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.089471636Z" level=info msg="runSandbox: removing pod sandbox 07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.089488351Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.089508959Z" level=info msg="runSandbox: unmounting shmPath for sandbox 07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.097310997Z" level=info msg="runSandbox: deleting pod ID 88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843 from idIndex" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.097340262Z" level=info msg="runSandbox: removing pod sandbox 88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.097356717Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.097368415Z" level=info msg="runSandbox: unmounting shmPath for sandbox 88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.102433014Z" level=info msg="runSandbox: removing pod sandbox from 
storage: 07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.105860049Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.105878349Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=473e8ea9-2a41-4897-8217-cb114531e4dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:15.106131 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:15.106346 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:15.106369 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:15.106417 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.110448261Z" level=info msg="runSandbox: removing pod sandbox from storage: 88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.113846794Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.113865887Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=9fbe2e60-1eb5-4dd1-8931-78fbfd70292c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:15.114063 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:15.114108 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:15.114132 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:15.114177 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:17:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:15.995729 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.996070368Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:15.996105210Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.007352869Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/1c8fc595-7b9f-45f1-8e3d-aba2f84f6cf7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.007372179Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.033865255Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.033894067Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ecd99b75\x2d2536\x2d40f7\x2db4e5\x2d65273d572777.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ecd99b75\x2d2536\x2d40f7\x2db4e5\x2d65273d572777.mount has successfully entered the 'dead' state. Jan 23 17:17:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c5af0a75\x2d168c\x2d4a86\x2d91c7\x2dd638aa1520f9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c5af0a75\x2d168c\x2d4a86\x2d91c7\x2dd638aa1520f9.mount has successfully entered the 'dead' state. Jan 23 17:17:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-07619ccdaa01655d7b3611d0ba7608c2f57f5e8f45cf3af9a243e622126f97a6-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:17:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-88a088ce5dc5bbd0fd28a7a72558b47017af92b41657acc2dfff99420fb2c843-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:17:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ecd99b75\x2d2536\x2d40f7\x2db4e5\x2d65273d572777.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ecd99b75\x2d2536\x2d40f7\x2db4e5\x2d65273d572777.mount has successfully entered the 'dead' state. Jan 23 17:17:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ecd99b75\x2d2536\x2d40f7\x2db4e5\x2d65273d572777.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ecd99b75\x2d2536\x2d40f7\x2db4e5\x2d65273d572777.mount has successfully entered the 'dead' state. Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.074309782Z" level=info msg="runSandbox: deleting pod ID 3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50 from idIndex" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.074334417Z" level=info msg="runSandbox: removing pod sandbox 3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.074349063Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.074360666Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.090445920Z" level=info msg="runSandbox: removing pod sandbox from storage: 3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.093424642Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:16.093442916Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=4e1492f1-2b20-4c4d-bdc8-b722f7d2d09e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:16.093678 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:16.093719 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:17:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:16.093742 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:17:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:16.093789 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(3307962a0a4919c4d48f57c27afb77163d9a14edc19065cc652e6ab163b7bb50): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:17:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:18.995980 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:17:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:18.996077 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:17:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:18.996331124Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:18.996371125Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:18.996417764Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:18.996444620Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:19.015572368Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/518f2cd5-5e38-4b39-86ea-d1663d436e26 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:19.015599480Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:19.015690563Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/c51c7cc1-ac8f-4d76-8730-a1fd2fde749b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:19.015711160Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.034043012Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.034080099Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-58192b4a\x2d1d67\x2d46e4\x2d9ca5\x2d97a0b26d1aa7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-58192b4a\x2d1d67\x2d46e4\x2d9ca5\x2d97a0b26d1aa7.mount has successfully entered the 'dead' state. Jan 23 17:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-58192b4a\x2d1d67\x2d46e4\x2d9ca5\x2d97a0b26d1aa7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-58192b4a\x2d1d67\x2d46e4\x2d9ca5\x2d97a0b26d1aa7.mount has successfully entered the 'dead' state. Jan 23 17:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-58192b4a\x2d1d67\x2d46e4\x2d9ca5\x2d97a0b26d1aa7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-58192b4a\x2d1d67\x2d46e4\x2d9ca5\x2d97a0b26d1aa7.mount has successfully entered the 'dead' state. Jan 23 17:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.087325432Z" level=info msg="runSandbox: deleting pod ID 1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3 from idIndex" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.087349809Z" level=info msg="runSandbox: removing pod sandbox 1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.087364381Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.087375945Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.103432224Z" level=info msg="runSandbox: removing pod sandbox from storage: 1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.106268791Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:21.106287996Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=e7aac037-aa74-446e-987c-ebc911d4555b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:21.106482 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:21.106524 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:21.106546 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:21.106599 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(1651dd0dc1e5a0e481c68212606e96eb4a6068fc791f1b92c07ee9131f63f4a3): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:21.996929 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:17:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:21.997497 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:17:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:22.996335 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:17:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:22.996646449Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:22.996688415Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:23.008299988Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/44ddd100-eeed-4ca6-9473-4b3c0e79bec2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:23.008319889Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:23.996302 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:17:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:23.996432 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:17:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:23.996609830Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:23.996872466Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:23.996724840Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:23.997000807Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:24.011913609Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/83ad8eef-b629-410a-9092-64681b5c182e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:24.011933355Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:24.012437980Z" level=info msg="Got 
pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/2ec07138-56c6-41a2-978f-2faaf25aa53a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:24.012458795Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:25.995801 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:17:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:25.996193039Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:25.996240845Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:26.007090896Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/b8fecfe4-1ef8-4743-8a0a-5e742437f398 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:26.007119724Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:26.996407 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:17:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:26.996792808Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:26.996831153Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:27.011662127Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/50fa198e-7eb5-419b-8aad-19bd6864dcdc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:27.011687694Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:27.892456 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:27.892478 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:27.892486 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:27.892493 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:27.892499 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:27.892506 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:27.892512 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:17:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:27.996576 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:17:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:27.996945500Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:27.996981655Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.008779397Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/4e012555-2095-4fde-a124-c9d33358d375 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.008806378Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.094604855Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.094635875Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.095533718Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.095571903Z" level=info msg="runSandbox: cleaning up 
namespaces after failing to run sandbox cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.095538232Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.095665959Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.097791908Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.097828456Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-96163bb9\x2d0024\x2d44d4\x2dacca\x2d6ff0163677dc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-96163bb9\x2d0024\x2d44d4\x2dacca\x2d6ff0163677dc.mount has successfully entered the 'dead' state. 
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.100114663Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.100141314Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4e1744c4\x2d5637\x2d455f\x2db11a\x2d502b3eb68026.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4e1744c4\x2d5637\x2d455f\x2db11a\x2d502b3eb68026.mount has successfully entered the 'dead' state. Jan 23 17:17:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7fa0871b\x2dd45f\x2d4c98\x2da0b2\x2ddddb2ef7d718.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7fa0871b\x2dd45f\x2d4c98\x2da0b2\x2ddddb2ef7d718.mount has successfully entered the 'dead' state. Jan 23 17:17:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f8d1169d\x2ddc79\x2d4a29\x2d86be\x2dd5913f2b1b14.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f8d1169d\x2ddc79\x2d4a29\x2d86be\x2dd5913f2b1b14.mount has successfully entered the 'dead' state. Jan 23 17:17:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c99529ea\x2dcadb\x2d4781\x2d9fda\x2dcb7b61984666.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c99529ea\x2dcadb\x2d4781\x2d9fda\x2dcb7b61984666.mount has successfully entered the 'dead' state. 
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.142747571Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.148335683Z" level=info msg="runSandbox: deleting pod ID 097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60 from idIndex" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.148365990Z" level=info msg="runSandbox: removing pod sandbox 097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.148339717Z" level=info msg="runSandbox: deleting pod ID cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508 from idIndex" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.148407869Z" level=info msg="runSandbox: removing pod sandbox cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.148425082Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.148441373Z" level=info msg="runSandbox: unmounting shmPath for sandbox cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.148379041Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.148502539Z" level=info msg="runSandbox: unmounting shmPath for sandbox 097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.149276867Z" level=info msg="runSandbox: deleting pod ID 173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7 from idIndex" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.149303836Z" level=info msg="runSandbox: removing pod sandbox 173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.149311650Z" level=info msg="runSandbox: deleting pod ID bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e from idIndex" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 
17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.149341461Z" level=info msg="runSandbox: removing pod sandbox bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.149355751Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.149368584Z" level=info msg="runSandbox: unmounting shmPath for sandbox bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.149318768Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.149422389Z" level=info msg="runSandbox: unmounting shmPath for sandbox 173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.152304896Z" level=info msg="runSandbox: deleting pod ID 8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221 from idIndex" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.152331071Z" level=info msg="runSandbox: removing pod sandbox 8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.152343768Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.152356147Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.156419775Z" level=info msg="runSandbox: removing pod sandbox from storage: 097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.159251612Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:17:28.159269422Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=21266532-ed10-4d9c-a523-e4b1e77c247e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.159398 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.159456 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.159480 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.159533 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.160501492Z" level=info msg="runSandbox: removing pod sandbox from storage: cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.160520360Z" level=info msg="runSandbox: removing pod sandbox from storage: 173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.160547323Z" level=info msg="runSandbox: removing pod sandbox from storage: bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.164043378Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.164062662Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c298d6c2-8e30-4437-9513-7abac44916ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.164300 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.164333 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.164353 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.164393 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.166465625Z" level=info msg="runSandbox: removing pod sandbox from storage: 8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.167131038Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.167149899Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=10914c38-82db-4545-bdbf-12db1098a6c1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.167258 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.167306 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.167347 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.167406 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.170322157Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.170340362Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=6037be0e-113e-49bc-a45b-02535ed09b1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.170608 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.170649 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.170673 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.170717 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.173309628Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.173327832Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=9de0bbf5-7287-4a95-bb6d-9532a8a8b3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.173439 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.173487 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.173526 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:28.173584 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:28.210760 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:28.210831 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:28.210967 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211084368Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211120346Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:28.211097 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211198626Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211232962Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211248621Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211276346Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:28.211266 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211353525Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211381395Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211481295Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.211500777Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.242437731Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/a6a9d844-9c76-4119-946c-b4fb84983996 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.242470510Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.243385217Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/54f2c298-5e62-494a-ad44-426771e051e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.243404824Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.244484952Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/f3e69e3c-ac1d-4c12-81f0-562e4201df83 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.244513519Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.245532798Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.245553594Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.246456825Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/54c16842-9706-4610-8469-e00099a892d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.246475249Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:28.995929 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.996367003Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:28.996409910Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:17:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:29.009080490Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/1ff835ab-a0de-4520-8c6f-3c2cf3b6a69c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:17:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:29.009100261Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4e1744c4\x2d5637\x2d455f\x2db11a\x2d502b3eb68026.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4e1744c4\x2d5637\x2d455f\x2db11a\x2d502b3eb68026.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4e1744c4\x2d5637\x2d455f\x2db11a\x2d502b3eb68026.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4e1744c4\x2d5637\x2d455f\x2db11a\x2d502b3eb68026.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c99529ea\x2dcadb\x2d4781\x2d9fda\x2dcb7b61984666.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c99529ea\x2dcadb\x2d4781\x2d9fda\x2dcb7b61984666.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c99529ea\x2dcadb\x2d4781\x2d9fda\x2dcb7b61984666.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c99529ea\x2dcadb\x2d4781\x2d9fda\x2dcb7b61984666.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7fa0871b\x2dd45f\x2d4c98\x2da0b2\x2ddddb2ef7d718.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7fa0871b\x2dd45f\x2d4c98\x2da0b2\x2ddddb2ef7d718.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7fa0871b\x2dd45f\x2d4c98\x2da0b2\x2ddddb2ef7d718.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7fa0871b\x2dd45f\x2d4c98\x2da0b2\x2ddddb2ef7d718.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f8d1169d\x2ddc79\x2d4a29\x2d86be\x2dd5913f2b1b14.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f8d1169d\x2ddc79\x2d4a29\x2d86be\x2dd5913f2b1b14.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f8d1169d\x2ddc79\x2d4a29\x2d86be\x2dd5913f2b1b14.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f8d1169d\x2ddc79\x2d4a29\x2d86be\x2dd5913f2b1b14.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-173ad0252165c3aabc25c132a087f5cde3a7bf9f15223e8ae9fbac5c1fc502b7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-96163bb9\x2d0024\x2d44d4\x2dacca\x2d6ff0163677dc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-96163bb9\x2d0024\x2d44d4\x2dacca\x2d6ff0163677dc.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-96163bb9\x2d0024\x2d44d4\x2dacca\x2d6ff0163677dc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-96163bb9\x2d0024\x2d44d4\x2dacca\x2d6ff0163677dc.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8c6d3adb96a3a9187b89604ef16cc1105e10c8d82d58baf9adfb7470f9a29221-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-cbe8f762deac34d16a41495c6239a219ead0810cc9b0962da62cfbb077f9c508-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bad7175ed24a2fa625007fa90acde91e0b2f5434b65e5604d141236288a8b99e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:17:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-097637385f1c1f54cfa3bbc0952d3282d80f39f6911def8f7cd47653a9630c60-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:17:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:32.996690 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:17:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:32.997199 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:17:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:33.995595 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:17:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:33.995940247Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:33.995981249Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:17:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:34.008022913Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/dc2f9ea4-cef6-4088-9427-2246c96a54ff Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:17:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:34.008223926Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:17:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:17:47.997634 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:17:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:17:47.998119 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:17:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:49.022722486Z" level=info msg="NetworkStart: stopping network for sandbox 0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:17:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:49.022940499Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/c9f13c71-558a-4644-bce7-9ddfee03b93c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:17:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:49.022966077Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:17:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:49.022974389Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:17:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:49.022983032Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:17:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:17:58.142742068Z" level=warning msg="Found defunct process with PID 7327 (runc)"
time="2023-01-23 17:17:58.142742068Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.020903561Z" level=info msg="NetworkStart: stopping network for sandbox 0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.021042746Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/1c8fc595-7b9f-45f1-8e3d-aba2f84f6cf7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.021063441Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.021069929Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.021076047Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:01.996907 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.997721189Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=bd86854e-4ef6-4bb4-a8f8-958e8a370fa6 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.997875302Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bd86854e-4ef6-4bb4-a8f8-958e8a370fa6 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.998646428Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=8a0297ce-e60a-4a30-9777-1916c56724af name=/runtime.v1.ImageService/ImageStatus Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.998759301Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8a0297ce-e60a-4a30-9777-1916c56724af name=/runtime.v1.ImageService/ImageStatus Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.999520593Z" level=info 
msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=9262149a-c2ed-441e-8d00-3bcde37ed426 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:18:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:01.999584231Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:18:02 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope. -- Subject: Unit crio-conmon-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:18:02 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29. -- Subject: Unit crio-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.115020966Z" level=info msg="Created container 89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=9262149a-c2ed-441e-8d00-3bcde37ed426 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.115539203Z" level=info msg="Starting container: 89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" id=507f4bb2-21a1-47bb-ad1d-563c40f9574d name=/runtime.v1.RuntimeService/StartContainer Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.134892821Z" level=info msg="Started container" PID=127232 containerID=89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=507f4bb2-21a1-47bb-ad1d-563c40f9574d name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.139619373Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.150069067Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.150091562Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.150102934Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.159327619Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.159350230Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 
Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.167847527Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.167863118Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.167872649Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.176653810Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.176669806Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.176678317Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.184716334Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:18:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:02.184732051Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:18:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:02.275142 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/190.log"
Jan 23 17:18:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:02.276727 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29}
Jan 23 17:18:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:02.276950 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:18:02 hub-master-0.workload.bos2.lab conmon[127211]: conmon 89160cc1619d68f21304 : container 127232 exited with status 1
Jan 23 17:18:02 hub-master-0.workload.bos2.lab systemd[1]: crio-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope has successfully entered the 'dead' state.
Jan 23 17:18:02 hub-master-0.workload.bos2.lab systemd[1]: crio-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope: Consumed 573ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope completed and consumed the indicated resources.
Jan 23 17:18:02 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope has successfully entered the 'dead' state.
Jan 23 17:18:02 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope: Consumed 51ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29.scope completed and consumed the indicated resources.
Jan 23 17:18:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:03.280189 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/191.log"
Jan 23 17:18:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:03.280705 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/190.log"
Jan 23 17:18:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:03.281820 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" exitCode=1
Jan 23 17:18:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:03.281842 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29}
Jan 23 17:18:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:03.281862 8631 scope.go:115] "RemoveContainer" containerID="c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79"
Jan 23 17:18:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:03.282790 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29"
Jan 23 17:18:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:03.282930513Z" level=info msg="Removing container: c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79" id=03ddff95-08a2-4c00-a8d4-51331752bec1 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:18:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:03.283354 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:18:03 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-c753bf654be704c7d1d2cc577baa3ab90cef6fbebdde4ac28806cba888d8f308-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-c753bf654be704c7d1d2cc577baa3ab90cef6fbebdde4ac28806cba888d8f308-merged.mount has successfully entered the 'dead' state.
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-c753bf654be704c7d1d2cc577baa3ab90cef6fbebdde4ac28806cba888d8f308-merged.mount has successfully entered the 'dead' state. Jan 23 17:18:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:03.303215113Z" level=info msg="Removed container c61028a2d66e7eda85e837042ddacf3ea8f3d8b390e846349fd54a4fe5207e79: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=03ddff95-08a2-4c00-a8d4-51331752bec1 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.028763324Z" level=info msg="NetworkStart: stopping network for sandbox 546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.028803110Z" level=info msg="NetworkStart: stopping network for sandbox 604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.029089607Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/518f2cd5-5e38-4b39-86ea-d1663d436e26 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.029111979Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.029119927Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.029127736Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.029176893Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/c51c7cc1-ac8f-4d76-8730-a1fd2fde749b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.029199408Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.029210080Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:04.029218551Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:04 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 17:18:04.285182 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/191.log" Jan 23 17:18:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:04.287326 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:18:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:04.287817 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:18:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:08.021367090Z" level=info msg="NetworkStart: stopping network for sandbox 994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:08.021507840Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/44ddd100-eeed-4ca6-9473-4b3c0e79bec2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:08.021530458Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:08.021537523Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:08.021544010Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.025050176Z" level=info msg="NetworkStart: stopping network for sandbox 275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.025188332Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/2ec07138-56c6-41a2-978f-2faaf25aa53a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.025216144Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.025223266Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.025230227Z" level=info 
msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.026550216Z" level=info msg="NetworkStart: stopping network for sandbox a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.026659039Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/83ad8eef-b629-410a-9092-64681b5c182e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.026679714Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.026686372Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:09.026693354Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:11.020309286Z" level=info msg="NetworkStart: stopping network for sandbox 4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:11.020469159Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/b8fecfe4-1ef8-4743-8a0a-5e742437f398 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:11.020495160Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:11.020502741Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:11.020510863Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:12.024519217Z" level=info msg="NetworkStart: stopping network for sandbox b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:12.024657657Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver 
ID:b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/50fa198e-7eb5-419b-8aad-19bd6864dcdc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:12.024678941Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:12.024685564Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:12.024693053Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.023260220Z" level=info msg="NetworkStart: stopping network for sandbox e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.023446930Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/4e012555-2095-4fde-a124-c9d33358d375 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.023472578Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.023479604Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.023487201Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.256800238Z" level=info msg="NetworkStart: stopping network for sandbox 094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.256928400Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/a6a9d844-9c76-4119-946c-b4fb84983996 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.256949988Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.256956387Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:13 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.256962453Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257356250Z" level=info msg="NetworkStart: stopping network for sandbox 0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257487363Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/f3e69e3c-ac1d-4c12-81f0-562e4201df83 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257506503Z" level=info msg="NetworkStart: stopping network for sandbox 6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257514957Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257602542Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257610722Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257626597Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/54f2c298-5e62-494a-ad44-426771e051e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257648866Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257655829Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.257662061Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.259115061Z" level=info msg="NetworkStart: stopping network for sandbox 84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.259239561Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls 
Namespace:openshift-oauth-apiserver ID:84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/182f567c-9d1f-4236-9022-215d43ec3d97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.259263000Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.259270624Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.259277552Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.260752832Z" level=info msg="NetworkStart: stopping network for sandbox b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.260874355Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/54c16842-9706-4610-8469-e00099a892d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.260898050Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.260906358Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:13.260913494Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:14.023386331Z" level=info msg="NetworkStart: stopping network for sandbox b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:14.023524516Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/1ff835ab-a0de-4520-8c6f-3c2cf3b6a69c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:14.023549252Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:14.023557282Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:14 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:14.023565388Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:18.996514 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:18:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:18.997232 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:18:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:19.021428319Z" level=info msg="NetworkStart: stopping network for sandbox 16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:19.021568491Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/dc2f9ea4-cef6-4088-9427-2246c96a54ff Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:19.021591016Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:18:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:19.021597705Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:18:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:19.021604141Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:27.893526 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:27.893545 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:27.893551 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:27.893558 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:27.893566 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:18:27 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:27.893575 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:18:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:27.893583 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:18:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:28.142461158Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:18:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:32.996713 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:18:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:32.997232 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.034080691Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.034294068Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c9f13c71\x2d558a\x2d4644\x2dbce7\x2d9ddfee03b93c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c9f13c71\x2d558a\x2d4644\x2dbce7\x2d9ddfee03b93c.mount has successfully entered the 'dead' state. Jan 23 17:18:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c9f13c71\x2d558a\x2d4644\x2dbce7\x2d9ddfee03b93c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c9f13c71\x2d558a\x2d4644\x2dbce7\x2d9ddfee03b93c.mount has successfully entered the 'dead' state. Jan 23 17:18:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c9f13c71\x2d558a\x2d4644\x2dbce7\x2d9ddfee03b93c.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c9f13c71\x2d558a\x2d4644\x2dbce7\x2d9ddfee03b93c.mount has successfully entered the 'dead' state. Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.080355252Z" level=info msg="runSandbox: deleting pod ID 0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac from idIndex" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.080386433Z" level=info msg="runSandbox: removing pod sandbox 0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.080402892Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.080417509Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.100465691Z" level=info msg="runSandbox: removing pod sandbox from storage: 0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.103939250Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:34.103959000Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=4bc32c49-d644-481d-b87b-4ec472b7ed35 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:34.104163 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:34.104212 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:34.104234 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:34.104281 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0a06c292004a71e317213df6286fa115501a889873f6cc3427a48e58077d3bac): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1186] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1190] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1191] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1370] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1371] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1382] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1384] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1385] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1386] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1389] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:18:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494318.1393] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:18:40 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494320.1564] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:18:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:43.996955 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:18:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:43.997638 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.033030329Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for 
ReadinessIndicatorFile (on del): timed out waiting for the condition" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.033073215Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1c8fc595\x2d7b9f\x2d45f1\x2d8e3d\x2daba2f84f6cf7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1c8fc595\x2d7b9f\x2d45f1\x2d8e3d\x2daba2f84f6cf7.mount has successfully entered the 'dead' state. Jan 23 17:18:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1c8fc595\x2d7b9f\x2d45f1\x2d8e3d\x2daba2f84f6cf7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1c8fc595\x2d7b9f\x2d45f1\x2d8e3d\x2daba2f84f6cf7.mount has successfully entered the 'dead' state. Jan 23 17:18:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1c8fc595\x2d7b9f\x2d45f1\x2d8e3d\x2daba2f84f6cf7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1c8fc595\x2d7b9f\x2d45f1\x2d8e3d\x2daba2f84f6cf7.mount has successfully entered the 'dead' state. Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.071286497Z" level=info msg="runSandbox: deleting pod ID 0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978 from idIndex" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.071309856Z" level=info msg="runSandbox: removing pod sandbox 0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.071323930Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.071335410Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978-userdata-shm.mount has successfully entered the 'dead' state. 
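The sandbox failures in this window all share one cause: Multus is configured with a readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which only appears once ovnkube-node is up, and the crash loop above keeps it from ever being written. Every CNI add or delete therefore blocks in a PollImmediate wait on that path and times out. A minimal sketch of that poll-until-exists pattern, with illustrative interval and timeout values (Multus itself does this in Go via wait.PollImmediate, as the log messages indicate):

```python
import os
import time

def wait_for_file(path: str, interval_s: float = 1.0, timeout_s: float = 30.0) -> bool:
    """Poll until `path` exists; True on success, False on timeout.

    Mirrors the PollImmediate pattern in these entries: check first,
    then re-check every interval until the deadline passes.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if os.path.exists(path):
            return True
        if time.monotonic() >= deadline:
            return False  # "timed out waiting for the condition"
        time.sleep(interval_s)

# The readiness indicator Multus waits on in these entries:
# wait_for_file("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf")
```

Until that file shows up, kubelet keeps recreating sandboxes and each attempt fails identically, which accounts for the repeating RunPodSandbox error blocks throughout this section.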
Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.084460340Z" level=info msg="runSandbox: removing pod sandbox from storage: 0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.087987503Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:46.088004341Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=303bd340-bc3f-4cd9-909e-e6117346cc93 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:46.088184 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:46.088232 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:18:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:46.088253 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:18:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:46.088293 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0ea89f8fba6ee50487f5bb9e33dc3becca7f804a0ac7533ec63d5c62813cf978): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:18:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:47.996673 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:47.997062908Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:47.997103918Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:18:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:48.009248293Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/3ec57c0a-6da6-4948-9a55-c69f095d76f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:48.009267559Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.040893046Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.041143533Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.041240527Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.041298576Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c51c7cc1\x2dac8f\x2d4d76\x2d8730\x2da1fd2fde749b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c51c7cc1\x2dac8f\x2d4d76\x2d8730\x2da1fd2fde749b.mount has successfully entered the 'dead' state. Jan 23 17:18:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-518f2cd5\x2d5e38\x2d4b39\x2d86ea\x2dd1663d436e26.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-518f2cd5\x2d5e38\x2d4b39\x2d86ea\x2dd1663d436e26.mount has successfully entered the 'dead' state. Jan 23 17:18:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c51c7cc1\x2dac8f\x2d4d76\x2d8730\x2da1fd2fde749b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c51c7cc1\x2dac8f\x2d4d76\x2d8730\x2da1fd2fde749b.mount has successfully entered the 'dead' state. Jan 23 17:18:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-518f2cd5\x2d5e38\x2d4b39\x2d86ea\x2dd1663d436e26.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-518f2cd5\x2d5e38\x2d4b39\x2d86ea\x2dd1663d436e26.mount has successfully entered the 'dead' state. Jan 23 17:18:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c51c7cc1\x2dac8f\x2d4d76\x2d8730\x2da1fd2fde749b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c51c7cc1\x2dac8f\x2d4d76\x2d8730\x2da1fd2fde749b.mount has successfully entered the 'dead' state. 
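Note how the teardown of the two failed sandboxes (546283d6… for the kube-controller-manager guard pod and 604e12b5… for network-check-target-qs9w4) is interleaved entry by entry across crio and systemd below. The only reliable join key is the 64-hex sandbox or container ID. A small illustrative helper (ours, not part of any tool shown here) that groups journal lines by those IDs makes such interleaved sequences easier to read:

```python
import re
from collections import defaultdict

# CRI-O and systemd refer to the same sandbox/container by a 64-hex ID.
HEX64 = re.compile(r"\b[0-9a-f]{64}\b")

def group_by_id(journal_text: str) -> dict[str, list[str]]:
    """Group journal lines under each 64-hex ID they mention."""
    groups: dict[str, list[str]] = defaultdict(list)
    for line in journal_text.splitlines():
        for cid in set(HEX64.findall(line)):
            groups[cid].append(line)
    return dict(groups)

sample = "\n".join([
    'crio[8584]: ... msg="runSandbox: removing pod sandbox '
    '546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff" ...',
    'crio[8584]: ... msg="runSandbox: removing pod sandbox '
    '604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd" ...',
])
for cid, lines in group_by_id(sample).items():
    print(cid[:12], "->", len(lines), "entry")
```

One caveat: conmon abbreviates IDs to 20 hex characters (e.g. "conmon 89160cc1619d68f21304" earlier in this log), so matching those entries would need a looser prefix-based pattern.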
Jan 23 17:18:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-518f2cd5\x2d5e38\x2d4b39\x2d86ea\x2dd1663d436e26.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-518f2cd5\x2d5e38\x2d4b39\x2d86ea\x2dd1663d436e26.mount has successfully entered the 'dead' state. Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.081312448Z" level=info msg="runSandbox: deleting pod ID 546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff from idIndex" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.081345058Z" level=info msg="runSandbox: removing pod sandbox 546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.081366952Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.081390679Z" level=info msg="runSandbox: unmounting shmPath for sandbox 546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.081316468Z" level=info msg="runSandbox: deleting pod ID 604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd from idIndex" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.081427920Z" level=info msg="runSandbox: removing pod sandbox 604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.081440792Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.081453955Z" level=info msg="runSandbox: unmounting shmPath for sandbox 604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:18:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.101487626Z" level=info msg="runSandbox: removing pod sandbox from storage: 604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.101490526Z" level=info msg="runSandbox: removing pod sandbox from storage: 546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.104730596Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.104751180Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=3504bd58-3a17-47d3-9a8f-766577002ad7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:49.104933 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:49.104978 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:18:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:49.105013 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:18:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:49.105062 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(604e12b573fa202872fdd03a06b99f342c55e8d97441cfcebe29328da87fbafd): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.108062936Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:49.108082951Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=e1473a74-3ab1-4fc7-93c9-285766c23035 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:49.108240 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:49.108282 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:49.108305 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:49.108355 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(546283d6a3cefdb775575003e28710bda899a71725e9d4ace92916e54958f2ff): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.032094986Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.032134286Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-44ddd100\x2deeed\x2d4ca6\x2d9473\x2d4b3c0e79bec2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-44ddd100\x2deeed\x2d4ca6\x2d9473\x2d4b3c0e79bec2.mount has successfully entered the 'dead' state. Jan 23 17:18:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-44ddd100\x2deeed\x2d4ca6\x2d9473\x2d4b3c0e79bec2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-44ddd100\x2deeed\x2d4ca6\x2d9473\x2d4b3c0e79bec2.mount has successfully entered the 'dead' state. Jan 23 17:18:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-44ddd100\x2deeed\x2d4ca6\x2d9473\x2d4b3c0e79bec2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-44ddd100\x2deeed\x2d4ca6\x2d9473\x2d4b3c0e79bec2.mount has successfully entered the 'dead' state. 
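Every "failed (add)" record above ends the same way: Multus will not attach a pod to "multus-cni-network" until the default network's config file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, exists on disk, and the wait is a poll that gives up with the k8s.io wait package's timeout error, whose text is exactly the "timed out waiting for the condition" in each message. A minimal Go sketch of that style of readiness wait (not the Multus source; the 1s interval and 45s timeout are illustrative values):

    // Sketch of the wait behind "PollImmediate error waiting for
    // ReadinessIndicatorFile": poll for the file until it exists or the
    // timeout expires with "timed out waiting for the condition".
    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        // Check immediately, then once per second.
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err == nil {
                return true, nil // default network config is present
            }
            return false, nil // not ready yet; keep polling
        })
    }

    func main() {
        // Path copied from the records above; on this node the file never
        // appears because ovnkube-node itself is crash-looping (see the
        // CrashLoopBackOff record further down).
        err := waitForReadinessIndicator(
            "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 45*time.Second)
        fmt.Println(err) // "timed out waiting for the condition"
    }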
Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.076319606Z" level=info msg="runSandbox: deleting pod ID 994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426 from idIndex" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.076347040Z" level=info msg="runSandbox: removing pod sandbox 994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.076362044Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.076374549Z" level=info msg="runSandbox: unmounting shmPath for sandbox 994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.087431424Z" level=info msg="runSandbox: removing pod sandbox from storage: 994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.090929301Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:53.090946238Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b3d79049-3342-42d8-87dc-fd1e94a5eb0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:53.091168 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:18:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:53.091222 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:18:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:53.091243 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:18:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:53.091289 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(994f78d30d0239c605bdc786444673efad97761cc15633a6cb7d380387a43426): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.035777220Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.035813151Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.037914754Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.037943830Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2ec07138\x2d56c6\x2d41a2\x2d978f\x2d2faaf25aa53a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2ec07138\x2d56c6\x2d41a2\x2d978f\x2d2faaf25aa53a.mount has successfully entered the 'dead' state. Jan 23 17:18:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-83ad8eef\x2db629\x2d410a\x2d9092\x2d64681b5c182e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-83ad8eef\x2db629\x2d410a\x2d9092\x2d64681b5c182e.mount has successfully entered the 'dead' state. Jan 23 17:18:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2ec07138\x2d56c6\x2d41a2\x2d978f\x2d2faaf25aa53a.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2ec07138\x2d56c6\x2d41a2\x2d978f\x2d2faaf25aa53a.mount has successfully entered the 'dead' state. Jan 23 17:18:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-83ad8eef\x2db629\x2d410a\x2d9092\x2d64681b5c182e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-83ad8eef\x2db629\x2d410a\x2d9092\x2d64681b5c182e.mount has successfully entered the 'dead' state. Jan 23 17:18:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2ec07138\x2d56c6\x2d41a2\x2d978f\x2d2faaf25aa53a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2ec07138\x2d56c6\x2d41a2\x2d978f\x2d2faaf25aa53a.mount has successfully entered the 'dead' state. Jan 23 17:18:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-83ad8eef\x2db629\x2d410a\x2d9092\x2d64681b5c182e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-83ad8eef\x2db629\x2d410a\x2d9092\x2d64681b5c182e.mount has successfully entered the 'dead' state. Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.088333258Z" level=info msg="runSandbox: deleting pod ID a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d from idIndex" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.088360755Z" level=info msg="runSandbox: removing pod sandbox a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.088373654Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.088386701Z" level=info msg="runSandbox: unmounting shmPath for sandbox a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.088334060Z" level=info msg="runSandbox: deleting pod ID 275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90 from idIndex" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.088442063Z" level=info msg="runSandbox: removing pod sandbox 275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.088454181Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.088466192Z" 
level=info msg="runSandbox: unmounting shmPath for sandbox 275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.108478960Z" level=info msg="runSandbox: removing pod sandbox from storage: 275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.108503945Z" level=info msg="runSandbox: removing pod sandbox from storage: a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.111818758Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.111836808Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=09cdede1-5d19-4de7-8257-832b6807c1b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:54.112063 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:54.112106 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:54.112128 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:54.112177 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.114735491Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:54.114753228Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=c442f6ef-ae84-4a50-8369-570528ec5350 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:54.114961 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:54.114995 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:54.115015 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:18:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:54.115056 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:18:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-275f440a9c5f83e13492f99a055b309a342a1ecad80cbc83f004494c6e778e90-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:18:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a8bfec1189984096f6fd8632acbdea604a2b6eefc3ff6326fef73e1de6f7309d-userdata-shm.mount has successfully entered the 'dead' state. 
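A readable quirk in the kubenswrapper records: the same rpc error is logged four times per attempt, once each from remote_runtime.go, kuberuntime_sandbox.go, kuberuntime_manager.go, and pod_workers.go, and by the pod_workers.go line the quotes have grown from \" to \\\" because each layer re-quotes the already-quoted message it received. A short Go illustration of that doubling:

    // Each %q pass escapes the previous pass's quotes, which is why
    // pod_workers.go shows \\\" where remote_runtime.go showed \".
    package main

    import "fmt"

    func main() {
        msg := `plugin type="multus" name="multus-cni-network" failed (add)`
        once := fmt.Sprintf("%q", msg)   // " becomes \"
        twice := fmt.Sprintf("%q", once) // \" becomes \\\"
        fmt.Println(msg)
        fmt.Println(once)
        fmt.Println(twice)
    }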
Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.030654790Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.030700752Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b8fecfe4\x2d1ef8\x2d4743\x2d8a0a\x2d5e742437f398.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b8fecfe4\x2d1ef8\x2d4743\x2d8a0a\x2d5e742437f398.mount has successfully entered the 'dead' state. Jan 23 17:18:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b8fecfe4\x2d1ef8\x2d4743\x2d8a0a\x2d5e742437f398.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b8fecfe4\x2d1ef8\x2d4743\x2d8a0a\x2d5e742437f398.mount has successfully entered the 'dead' state. Jan 23 17:18:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b8fecfe4\x2d1ef8\x2d4743\x2d8a0a\x2d5e742437f398.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b8fecfe4\x2d1ef8\x2d4743\x2d8a0a\x2d5e742437f398.mount has successfully entered the 'dead' state. 
Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.078423325Z" level=info msg="runSandbox: deleting pod ID 4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046 from idIndex" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.078453285Z" level=info msg="runSandbox: removing pod sandbox 4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.078470058Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.078486138Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.091457204Z" level=info msg="runSandbox: removing pod sandbox from storage: 4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.095087326Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:56.095110739Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=aec1e40f-64b9-47c2-a0ee-52a509841a9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:56.095250 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:18:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:56.095294 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:18:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:56.095319 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:18:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:56.095362 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4134424a2ff314897a16f81077b63cf691f1747139271af38174c452bdd4c046): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:18:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:56.996842 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:18:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:56.997351 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.036742080Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.036778780Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-50fa198e\x2d7eb5\x2d419b\x2d8aad\x2d19bd6864dcdc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-50fa198e\x2d7eb5\x2d419b\x2d8aad\x2d19bd6864dcdc.mount has successfully entered the 'dead' state. Jan 23 17:18:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-50fa198e\x2d7eb5\x2d419b\x2d8aad\x2d19bd6864dcdc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-50fa198e\x2d7eb5\x2d419b\x2d8aad\x2d19bd6864dcdc.mount has successfully entered the 'dead' state. Jan 23 17:18:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-50fa198e\x2d7eb5\x2d419b\x2d8aad\x2d19bd6864dcdc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-50fa198e\x2d7eb5\x2d419b\x2d8aad\x2d19bd6864dcdc.mount has successfully entered the 'dead' state. 
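The "back-off 5m0s restarting failed container=ovnkube-node" record above shows the kubelet's crash-loop backoff at its ceiling: the restart delay doubles after every failed run and is capped at five minutes, which is why the same 5m0s figure recurs for ovnkube-node-897lw throughout this log. A standalone sketch of that doubling (the 10s start and 5m cap mirror the kubelet's documented defaults; this is not kubelet code):

    // Crash-loop backoff: double the delay after each restart, capped at 5m.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // stays at 5m0s until a run lasts long enough to reset it
            }
        }
    }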
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.071302753Z" level=info msg="runSandbox: deleting pod ID b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4 from idIndex" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.071329558Z" level=info msg="runSandbox: removing pod sandbox b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.071343516Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.071357108Z" level=info msg="runSandbox: unmounting shmPath for sandbox b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.084433103Z" level=info msg="runSandbox: removing pod sandbox from storage: b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.087650700Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.087669374Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=cc0a77c5-6f2f-4590-b277-36afe8f5bbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:57.087903 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:18:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:57.087944 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:18:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:57.087969 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:18:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:57.088015 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(b11bf5b09bd0743b86fb774616d920d8de1d58f0f69c98cf1a8e7779e95250f4): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:18:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:57.997572 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.997924029Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:57.997956948Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.012697248Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/c40af15a-8cec-485c-a69b-74b03cc6040d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.012729009Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.034705925Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.034746098Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4e012555\x2d2095\x2d4fde\x2da124\x2dc9d33358d375.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4e012555\x2d2095\x2d4fde\x2da124\x2dc9d33358d375.mount has successfully entered the 'dead' state.
Jan 23 17:18:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4e012555\x2d2095\x2d4fde\x2da124\x2dc9d33358d375.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4e012555\x2d2095\x2d4fde\x2da124\x2dc9d33358d375.mount has successfully entered the 'dead' state.
Jan 23 17:18:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4e012555\x2d2095\x2d4fde\x2da124\x2dc9d33358d375.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4e012555\x2d2095\x2d4fde\x2da124\x2dc9d33358d375.mount has successfully entered the 'dead' state.
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.073308067Z" level=info msg="runSandbox: deleting pod ID e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804 from idIndex" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.073332788Z" level=info msg="runSandbox: removing pod sandbox e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.073347926Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.073361331Z" level=info msg="runSandbox: unmounting shmPath for sandbox e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.089471606Z" level=info msg="runSandbox: removing pod sandbox from storage: e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.092762347Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.092783213Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=636f3b3a-78e7-4925-baac-f9b815e73166 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.093005 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.093048 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.093071 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.093121 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(e6332203a1ac1e882a6b85ef10a6fbbdc518303dfe172f33600813bc7418a804): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.143583416Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.267882097Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.267909627Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.267925079Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.267954928Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.269102538Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.269134581Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.269805892Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.269831562Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.270937284Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.270962195Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f3e69e3c\x2dac1d\x2d4c12\x2d81f0\x2d562e4201df83.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f3e69e3c\x2dac1d\x2d4c12\x2d81f0\x2d562e4201df83.mount has successfully entered the 'dead' state. Jan 23 17:18:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a6a9d844\x2d9c76\x2d4119\x2d946c\x2db4fb84983996.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a6a9d844\x2d9c76\x2d4119\x2d946c\x2db4fb84983996.mount has successfully entered the 'dead' state. 
Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.310336111Z" level=info msg="runSandbox: deleting pod ID 84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842 from idIndex" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.310365999Z" level=info msg="runSandbox: removing pod sandbox 84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.310388179Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.310401050Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.310336220Z" level=info msg="runSandbox: deleting pod ID 094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d from idIndex" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.310446886Z" level=info msg="runSandbox: removing pod sandbox 094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.310459499Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.310472044Z" level=info msg="runSandbox: unmounting shmPath for sandbox 094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.311278672Z" level=info msg="runSandbox: deleting pod ID 0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2 from idIndex" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.311302419Z" level=info msg="runSandbox: removing pod sandbox 0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.311314535Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.311324554Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.314297244Z" level=info msg="runSandbox: deleting pod ID 6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162 from idIndex" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.314322787Z" level=info msg="runSandbox: removing pod sandbox 6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.314336374Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.314347071Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.314300982Z" level=info msg="runSandbox: deleting pod ID b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8 from idIndex" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.314407938Z" level=info msg="runSandbox: removing pod sandbox b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.314420576Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.314431998Z" level=info msg="runSandbox: unmounting shmPath for sandbox b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.322441113Z" level=info msg="runSandbox: removing pod sandbox from storage: 094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.322448808Z" level=info msg="runSandbox: removing pod sandbox from storage: 84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.323500860Z" level=info msg="runSandbox: removing pod sandbox from storage: 0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.325685545Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.325702556Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=76a294eb-be46-4010-8884-8f4342b3d23f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.325968 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.326004 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.326025 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.326065 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.327434985Z" level=info msg="runSandbox: removing pod sandbox from storage: 6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.327465231Z" level=info msg="runSandbox: removing pod sandbox from storage: b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.328674202Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.328691906Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=3f3eb629-f056-4372-919e-6191ae752ef7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.328899 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.328930 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.328949 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.328985 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.331611115Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.331628666Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=fedc9b0d-cb82-49e4-bd4c-3afe57e346ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.331829 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.331859 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.331879 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.331916 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.334585197Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.334605023Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=0c1dea7f-727c-43e0-805e-9c5ff6756be6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.334839 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.334873 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.334894 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.334931 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.337692895Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.337715500Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=00e14f9b-ae46-481c-b49a-a05e4e74f35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.337913 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.337946 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.337965 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:58.338003 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:58.389855 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:58.389999 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:58.390101 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390126826Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390154963Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:58.390258 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:18:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:18:58.390337 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390253517Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390280059Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390335133Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390359554Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390455207Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390478728Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390586690Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.390605385Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.419254412Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/d34d4951-8e21-44ee-8c59-25ce7cf8070c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.419277590Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.420466292Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/bd59bca4-6897-410e-bcc3-5a03be9d8af4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.420488203Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.421782688Z" level=info msg="Got 
pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c5738f5f-33fe-453f-88c9-edc926566b5b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.421802010Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.422879907Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/97f5bf83-cebb-417d-a7b3-531e3c426a7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.422899503Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.423658943Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/f0907bc3-d838-4263-8724-d98df59acdb8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:18:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:58.423679759Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.033940648Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.033975274Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1ff835ab\x2da0de\x2d4520\x2d8c6f\x2d3c2cf3b6a69c.mount: Succeeded. 
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1ff835ab\x2da0de\x2d4520\x2d8c6f\x2d3c2cf3b6a69c.mount: Succeeded. Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-54c16842\x2d9706\x2d4610\x2d8469\x2de00099a892d9.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-54c16842\x2d9706\x2d4610\x2d8469\x2de00099a892d9.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-54c16842\x2d9706\x2d4610\x2d8469\x2de00099a892d9.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-182f567c\x2d9d1f\x2d4236\x2d9022\x2d215d43ec3d97.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-182f567c\x2d9d1f\x2d4236\x2d9022\x2d215d43ec3d97.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-182f567c\x2d9d1f\x2d4236\x2d9022\x2d215d43ec3d97.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f3e69e3c\x2dac1d\x2d4c12\x2d81f0\x2d562e4201df83.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f3e69e3c\x2dac1d\x2d4c12\x2d81f0\x2d562e4201df83.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-54f2c298\x2d5e62\x2d494a\x2dad44\x2d426771e051e8.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-54f2c298\x2d5e62\x2d494a\x2dad44\x2d426771e051e8.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-54f2c298\x2d5e62\x2d494a\x2dad44\x2d426771e051e8.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a6a9d844\x2d9c76\x2d4119\x2d946c\x2db4fb84983996.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a6a9d844\x2d9c76\x2d4119\x2d946c\x2db4fb84983996.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b1305c2f046d06a31a7c7fea12432639fbe2feea7baf7958181f7b884134aca8-userdata-shm.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-84c28d87faf4ebcd45b283f8b07508970073d405210b6f2cdae1ab5a0ee82842-userdata-shm.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0c9efb6671191daf62a4d49a70727f9ed4c35fce743d662b4d6c7ea0d56534f2-userdata-shm.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6bf4b418058dbbde8e133ad5016356972adcd60fdc01e04e56296c045b30a162-userdata-shm.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-094d97e395c5e2c629577ce2bfff3e09c9297b74389316400acdafb17a7fa34d-userdata-shm.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1ff835ab\x2da0de\x2d4520\x2d8c6f\x2d3c2cf3b6a69c.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.085275327Z" level=info msg="runSandbox: deleting pod ID b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3 from idIndex" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.085301992Z" level=info msg="runSandbox: removing pod sandbox b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.085318602Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.085331994Z" level=info msg="runSandbox: unmounting shmPath for sandbox b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:18:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3-userdata-shm.mount: Succeeded.
Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.101447724Z" level=info msg="runSandbox: removing pod sandbox from storage: b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.104320398Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:18:59.104338295Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=cd7a4e88-28d8-4cbf-b354-24f09e45eafe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:18:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:59.104569 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:18:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:59.104618 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:59.104642 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:18:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:18:59.104696 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b9a26b4530435cf66fb024e07afc2d709df31f12a3fa910e55ee479471d112e3): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:19:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:01.996538 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:19:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:01.996930578Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:01.997161760Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:02.009043421Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/7cd5e848-4340-4a91-96bc-edeb4c193c6e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:02.009063015Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:03.995839 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:03.996129497Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:03.996167965Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.006991328Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/43c95f23-5316-4607-bc8f-be965262ea3b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.007012082Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.033792508Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.033829424Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dc2f9ea4\x2dcef6\x2d4088\x2d9427\x2d2246c96a54ff.mount: Succeeded.
Jan 23 17:19:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dc2f9ea4\x2dcef6\x2d4088\x2d9427\x2d2246c96a54ff.mount: Succeeded.
Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.082300698Z" level=info msg="runSandbox: deleting pod ID 16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e from idIndex" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.082329078Z" level=info msg="runSandbox: removing pod sandbox 16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.082344591Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.082357635Z" level=info msg="runSandbox: unmounting shmPath for sandbox 16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.098455609Z" level=info msg="runSandbox: removing pod sandbox from storage: 16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.101119863Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.101140138Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0"
id=731520db-a697-4f23-a8d1-b1e6f5ca1f16 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:19:04.101374 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:19:04.101415 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:19:04.101438 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:19:04.101487 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:19:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dc2f9ea4\x2dcef6\x2d4088\x2d9427\x2d2246c96a54ff.mount: Succeeded.
Jan 23 17:19:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-16a613f5bf6ce6eddd3218cf8ce6dc8c65daa0d394e2213e54a24bbd26ebb18e-userdata-shm.mount: Succeeded.
Jan 23 17:19:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:04.995976 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.996331219Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:04.996375077Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:05.007237712Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/c2c7aa91-1939-4fc1-a872-d7b099f5975b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:05.007265145Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:06.995883 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:19:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:06.996218703Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:06.996257356Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:07.007971425Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ff10b346-bf44-42d4-8792-51ed322da685 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:07.007989966Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:08.995454 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:19:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:08.995538 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:19:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:08.995929788Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:08.995969756Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:08.996015758Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:08.996044396Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:09.016899076Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/d36c9841-68a1-4e9c-8640-43a9b3e0563e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:09.016925603Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:09.016899213Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4a8287a0-2755-45c7-a9d7-a292ba7d0a13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:09.017049935Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:09.995821 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:19:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:09.996199108Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:09.996243384Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:10.007460443Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/147fb098-1ea3-4a88-8827-e5c4493de8d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:10.007481606Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:10.996143 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:19:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:10.996478465Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:10.996511109Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:10.996829 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:19:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:19:10.997331 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:19:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:11.008621045Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/044e1bb1-9b94-4a71-a273-57c78518442a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:11.008640384Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:11.995950 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:19:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:11.996320495Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:11.996361937Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:12.007462223Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/90da51c3-248a-47b9-b2ac-151c7944e056 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:12.007490480Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:17.996934 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:19:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:17.997288979Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:17.997329728Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:19:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:18.008256837Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/59c2746b-8f67-4513-bad7-f0d274422a76 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:18.008276718Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:24.996925 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:19:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:19:24.997456 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:19:27 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 17:19:27.894451 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:27.894470 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:27.894478 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:27.894484 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:27.894492 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:27.894498 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:19:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:27.894506 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:19:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:28.143582886Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:19:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:33.023638159Z" level=info msg="NetworkStart: stopping network for sandbox a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:33.023788823Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/3ec57c0a-6da6-4948-9a55-c69f095d76f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:33.023813809Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:19:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:33.023821274Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:19:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:33.023828571Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:39.996667 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:19:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:19:39.997307 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.026349806Z" level=info msg="NetworkStart: stopping network for sandbox 9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.026540180Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/c40af15a-8cec-485c-a69b-74b03cc6040d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.026566001Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.026574761Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.026581787Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433246798Z" level=info msg="NetworkStart: stopping network for sandbox 8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433409770Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/d34d4951-8e21-44ee-8c59-25ce7cf8070c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433432296Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433439958Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433446307Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433769189Z" level=info msg="NetworkStart: stopping network for sandbox 89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433887234Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg 
Namespace:openshift-apiserver ID:89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/bd59bca4-6897-410e-bcc3-5a03be9d8af4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433909045Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433916177Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.433922228Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.435022046Z" level=info msg="NetworkStart: stopping network for sandbox 2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.435120257Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/c5738f5f-33fe-453f-88c9-edc926566b5b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.435141950Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.435148669Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.435155015Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.436761249Z" level=info msg="NetworkStart: stopping network for sandbox c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.436873462Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/97f5bf83-cebb-417d-a7b3-531e3c426a7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.436896637Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.436903852Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:19:43 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.436910658Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.436989085Z" level=info msg="NetworkStart: stopping network for sandbox 9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.437120320Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/f0907bc3-d838-4263-8724-d98df59acdb8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.437143792Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.437154551Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:43.437161442Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:47.021617242Z" level=info msg="NetworkStart: stopping network for sandbox b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:47.021775288Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/7cd5e848-4340-4a91-96bc-edeb4c193c6e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:47.021801051Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:47.021808432Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:47.021816020Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:49.021668105Z" level=info msg="NetworkStart: stopping network for sandbox 6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:49.021814479Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/43c95f23-5316-4607-bc8f-be965262ea3b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:49.021835988Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:49.021843124Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:49.021849018Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:50.019953588Z" level=info msg="NetworkStart: stopping network for sandbox 2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:50.020104182Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/c2c7aa91-1939-4fc1-a872-d7b099f5975b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:50.020130055Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:50.020137260Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:50.020144425Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:19:51.996282 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29"
Jan 23 17:19:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:19:51.996853 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:19:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:52.020581459Z" level=info msg="NetworkStart: stopping network for sandbox d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:52.020721694Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ff10b346-bf44-42d4-8792-51ed322da685 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:52.020743328Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:52.020749737Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:52.020755388Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030261255Z" level=info msg="NetworkStart: stopping network for sandbox 8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030267366Z" level=info msg="NetworkStart: stopping network for sandbox dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030597106Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/d36c9841-68a1-4e9c-8640-43a9b3e0563e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030619834Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030627659Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030634420Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030664183Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4a8287a0-2755-45c7-a9d7-a292ba7d0a13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030684940Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030692427Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:54.030698339Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:55.021509130Z" level=info msg="NetworkStart: stopping network for sandbox 46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:55.021650823Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/147fb098-1ea3-4a88-8827-e5c4493de8d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:55.021674229Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:55.021680884Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:55.021687559Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:56.021651800Z" level=info msg="NetworkStart: stopping network for sandbox 94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:56.021787758Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/044e1bb1-9b94-4a71-a273-57c78518442a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:56.021811831Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:56.021819269Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:56.021825363Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:57.019625706Z" level=info msg="NetworkStart: stopping network for sandbox da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:19:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:57.019763055Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/90da51c3-248a-47b9-b2ac-151c7944e056 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:19:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:57.019784132Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:19:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:57.019790311Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:19:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:57.019797303Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:19:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:19:58.142906478Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:20:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:03.022857660Z" level=info msg="NetworkStart: stopping network for sandbox 4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:03.023003348Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/59c2746b-8f67-4513-bad7-f0d274422a76 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:20:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:03.023027264Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:20:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:03.023033847Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:20:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:03.023039545Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:20:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:06.996739 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29"
Jan 23 17:20:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:06.997371 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494408.1177] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 17:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494408.1182] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 17:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494408.1183] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 17:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494408.1422] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 17:20:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494408.1424] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.035476808Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.035512468Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3ec57c0a\x2d6da6\x2d4948\x2d9a55\x2dc69f095d76f5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3ec57c0a\x2d6da6\x2d4948\x2d9a55\x2dc69f095d76f5.mount has successfully entered the 'dead' state.
Jan 23 17:20:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3ec57c0a\x2d6da6\x2d4948\x2d9a55\x2dc69f095d76f5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3ec57c0a\x2d6da6\x2d4948\x2d9a55\x2dc69f095d76f5.mount has successfully entered the 'dead' state.
Jan 23 17:20:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3ec57c0a\x2d6da6\x2d4948\x2d9a55\x2dc69f095d76f5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-3ec57c0a\x2d6da6\x2d4948\x2d9a55\x2dc69f095d76f5.mount has successfully entered the 'dead' state.
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.071323781Z" level=info msg="runSandbox: deleting pod ID a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef from idIndex" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.071346847Z" level=info msg="runSandbox: removing pod sandbox a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.071359945Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.071372954Z" level=info msg="runSandbox: unmounting shmPath for sandbox a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.083462850Z" level=info msg="runSandbox: removing pod sandbox from storage: a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.086690087Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:18.086709225Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=fb9dd745-a390-4225-b1d6-f949df93973b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:18.086917 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:20:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:18.086956 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:20:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:18.086980 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:20:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:18.087026 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(a84d3ec002da782da4870cf35f154acdbb59a7903d5f1b6a94cda506f52452ef): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:20:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:19.996396 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29"
Jan 23 17:20:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:19.996901 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:27.895050 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:27.895070 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:27.895076 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:27.895083 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:27.895089 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:27.895095 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:20:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:27.895102 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:20:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:27.899410369Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=05a2e199-6bdc-4fa5-9f66-06689fec9c46 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:20:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:27.899721618Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=05a2e199-6bdc-4fa5-9f66-06689fec9c46 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.037744340Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.037781375Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c40af15a\x2d8cec\x2d485c\x2da69b\x2d74b03cc6040d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c40af15a\x2d8cec\x2d485c\x2da69b\x2d74b03cc6040d.mount has successfully entered the 'dead' state.
Jan 23 17:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c40af15a\x2d8cec\x2d485c\x2da69b\x2d74b03cc6040d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c40af15a\x2d8cec\x2d485c\x2da69b\x2d74b03cc6040d.mount has successfully entered the 'dead' state.
Jan 23 17:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c40af15a\x2d8cec\x2d485c\x2da69b\x2d74b03cc6040d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c40af15a\x2d8cec\x2d485c\x2da69b\x2d74b03cc6040d.mount has successfully entered the 'dead' state.
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.073311642Z" level=info msg="runSandbox: deleting pod ID 9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb from idIndex" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.073338858Z" level=info msg="runSandbox: removing pod sandbox 9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.073354532Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.073370135Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.089462887Z" level=info msg="runSandbox: removing pod sandbox from storage: 9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.092992879Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.093010773Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=f7db1b4f-4cba-4e33-8234-e1470de6df59 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.093251 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.093293 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.093314 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.093356 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(9d5c2abcf298c2f5499843b9beff29f7fa0b7aed16d26746d40211a386170ffb): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.142244314Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.445553860Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.445589498Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.445587677Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.445682151Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.445701491Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.445732948Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.447943924Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.447975812Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.448228812Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.448257190Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c5738f5f\x2d33fe\x2d453f\x2d88c9\x2dedc926566b5b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c5738f5f\x2d33fe\x2d453f\x2d88c9\x2dedc926566b5b.mount has successfully entered the 'dead' state.
Jan 23 17:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bd59bca4\x2d6897\x2d410e\x2dbcc3\x2d5a03be9d8af4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-bd59bca4\x2d6897\x2d410e\x2dbcc3\x2d5a03be9d8af4.mount has successfully entered the 'dead' state.
Jan 23 17:20:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d34d4951\x2d8e21\x2d44ee\x2d8c59\x2d25ce7cf8070c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d34d4951\x2d8e21\x2d44ee\x2d8c59\x2d25ce7cf8070c.mount has successfully entered the 'dead' state.
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492347963Z" level=info msg="runSandbox: deleting pod ID 89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9 from idIndex" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492381059Z" level=info msg="runSandbox: removing pod sandbox 89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492400634Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492415147Z" level=info msg="runSandbox: unmounting shmPath for sandbox 89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492349925Z" level=info msg="runSandbox: deleting pod ID 2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4 from idIndex" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492354100Z" level=info msg="runSandbox: deleting pod ID 8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960 from idIndex" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492471465Z" level=info msg="runSandbox: removing pod sandbox 8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492485959Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492497996Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492448427Z" level=info msg="runSandbox: removing pod sandbox 2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492570502Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.492587151Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.497278532Z" level=info msg="runSandbox: deleting pod ID c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6 from idIndex" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.497305936Z" level=info msg="runSandbox: removing pod sandbox c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.497318574Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.497329459Z" level=info msg="runSandbox: unmounting shmPath for sandbox c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.498275346Z" level=info msg="runSandbox: deleting pod ID 9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726 from idIndex" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.498303410Z" level=info msg="runSandbox: removing pod sandbox 9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.498315365Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.498326426Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.505495080Z" level=info msg="runSandbox: removing pod sandbox from storage: 2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.512760087Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.512784105Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=f5b0b8e6-9a96-4289-bd08-516f08069fcc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.513442042Z" level=info msg="runSandbox: removing pod sandbox from storage: 89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.513322 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.513496 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.513479867Z" level=info msg="runSandbox: removing pod sandbox from storage: 8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.513518 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.513571 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.516639893Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.516657985Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=4a7b141d-cd90-4c84-a6aa-ed8d37e495d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.516829 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.516869 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.516890 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.516933 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.517431122Z" level=info msg="runSandbox: removing pod sandbox from storage: 9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.518427056Z" level=info msg="runSandbox: removing pod sandbox from storage: c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.519859128Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.519879463Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=872295ea-464d-480d-a766-1eaf2caec3e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.520087 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.520123 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.520143 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.520181 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.522898723Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.522917327Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=fd3759bb-15a3-4543-85bd-322e55ffd6e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.523173 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.523210 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.523237 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.523286 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.525914487Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.525933462Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=d79c179c-1329-455f-b1b9-3d66612f70b4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.526108 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.526143 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.526164 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:28.526212 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:28.569561 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:28.569691 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:28.569825 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:28.569904 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.569905608Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.569936457Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:28.569986 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.569912986Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.570017091Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.570027601Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.570058245Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.570258980Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.570278821Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.570294706Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.570304790Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.596396960Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/0825d8ae-8ced-44d8-9900-a5ca36daf1cf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.596416584Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.597306488Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/314c6e7d-7747-46fe-bc7b-57dbca25c2a1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.597324974Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.598335502Z" level=info msg="Got 
pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/3d836407-72ad-4ed6-b146-f9f992c0c51a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.598353174Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.599099533Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d000539c-3943-4213-b3b2-5b1e62b31d95 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.599120032Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.600262361Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/6931d169-1d03-493e-94c0-3e624bf76ecb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:28.600282421Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f0907bc3\x2dd838\x2d4263\x2d8724\x2dd98df59acdb8.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f0907bc3\x2dd838\x2d4263\x2d8724\x2dd98df59acdb8.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f0907bc3\x2dd838\x2d4263\x2d8724\x2dd98df59acdb8.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-97f5bf83\x2dcebb\x2d417d\x2da7b3\x2d531e3c426a7f.mount: Succeeded.
Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-97f5bf83\x2dcebb\x2d417d\x2da7b3\x2d531e3c426a7f.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-97f5bf83\x2dcebb\x2d417d\x2da7b3\x2d531e3c426a7f.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c5738f5f\x2d33fe\x2d453f\x2d88c9\x2dedc926566b5b.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c5738f5f\x2d33fe\x2d453f\x2d88c9\x2dedc926566b5b.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bd59bca4\x2d6897\x2d410e\x2dbcc3\x2d5a03be9d8af4.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bd59bca4\x2d6897\x2d410e\x2dbcc3\x2d5a03be9d8af4.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d34d4951\x2d8e21\x2d44ee\x2d8c59\x2d25ce7cf8070c.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d34d4951\x2d8e21\x2d44ee\x2d8c59\x2d25ce7cf8070c.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-89cb1ad80902d91201ad23a2f6d4fbc601ab652689aa9a026f1d5572c70013a9-userdata-shm.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9944c0f0dfdb9635f2078540cf0e7901f79e07b2ebe37b395b9f1b9250ae7726-userdata-shm.mount: Succeeded.
Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c4a70b0e5824911c667524c095a53608729f8bdd5d12f3cf7255e36ba4a087d6-userdata-shm.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2b03fe08f68a91018cd7f9fd7f8a088489ee2cc0a04d33f35687f2dd4cc9feb4-userdata-shm.mount: Succeeded. Jan 23 17:20:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8d68f39ddd4f6cd74055bf303a04b054c701458d9c19ebbcca16df788d264960-userdata-shm.mount: Succeeded. Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.032965807Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.033006510Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7cd5e848\x2d4340\x2d4a91\x2d96bc\x2dedeb4c193c6e.mount: Succeeded. Jan 23 17:20:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7cd5e848\x2d4340\x2d4a91\x2d96bc\x2dedeb4c193c6e.mount: Succeeded.
Jan 23 17:20:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7cd5e848\x2d4340\x2d4a91\x2d96bc\x2dedeb4c193c6e.mount: Succeeded. Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.078310827Z" level=info msg="runSandbox: deleting pod ID b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888 from idIndex" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.078336078Z" level=info msg="runSandbox: removing pod sandbox b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.078349922Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.078362552Z" level=info msg="runSandbox: unmounting shmPath for sandbox b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888-userdata-shm.mount: Succeeded.
Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.090464871Z" level=info msg="runSandbox: removing pod sandbox from storage: b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.093719619Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.093740623Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=a42a5b62-0d97-4541-8627-1d33b7a6ac05 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:32.093927 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:20:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:32.093972 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:32.093997 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:32.094046 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b68834312763bc13ac32f4c2c713900f937195f5c3cf5484fc282a9b6bf40888): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:20:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:32.996310 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.996649621Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:32.996689687Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:33.008839645Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/5682b084-e414-4fc9-9917-d4903313146f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:33.008859194Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:33.996288 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:20:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:33.996822 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.033651536Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.033887431Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-43c95f23\x2d5316\x2d4607\x2dbc8f\x2dbe965262ea3b.mount: Succeeded. 
Jan 23 17:20:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-43c95f23\x2d5316\x2d4607\x2dbc8f\x2dbe965262ea3b.mount: Succeeded. Jan 23 17:20:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-43c95f23\x2d5316\x2d4607\x2dbc8f\x2dbe965262ea3b.mount: Succeeded. Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.068284379Z" level=info msg="runSandbox: deleting pod ID 6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e from idIndex" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.068309281Z" level=info msg="runSandbox: removing pod sandbox 6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.068323654Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.068335503Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e-userdata-shm.mount: Succeeded.
Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.079478173Z" level=info msg="runSandbox: removing pod sandbox from storage: 6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.082654373Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:34.082674340Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=5afea505-b7e7-4324-9de0-dbf199b50560 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:34.082859 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:20:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:34.082896 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:20:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:34.082929 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:20:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:34.082968 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f00e388430ee40b252650150e9296b57a778d4bea373b4962d1d91de29bf13e): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.030719673Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.030762045Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c2c7aa91\x2d1939\x2d4fc1\x2da872\x2dd7b099f5975b.mount: Succeeded. Jan 23 17:20:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c2c7aa91\x2d1939\x2d4fc1\x2da872\x2dd7b099f5975b.mount: Succeeded. Jan 23 17:20:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c2c7aa91\x2d1939\x2d4fc1\x2da872\x2dd7b099f5975b.mount: Succeeded.
Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.070276825Z" level=info msg="runSandbox: deleting pod ID 2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7 from idIndex" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.070303435Z" level=info msg="runSandbox: removing pod sandbox 2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.070320496Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.070335463Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7-userdata-shm.mount: Succeeded. Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.090456461Z" level=info msg="runSandbox: removing pod sandbox from storage: 2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.093835605Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:35.093854679Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=3b50914e-5a53-461b-adbe-6df07c0035aa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:35.094069 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 17:20:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:35.094106 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:20:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:35.094127 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:20:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:35.094166 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(2f40680b8a35ebd95ecb2a03118a6a2ea152a117e134366075d9d8057c8e69d7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.032827133Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.032865525Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ff10b346\x2dbf44\x2d42d4\x2d8792\x2d51ed322da685.mount: Succeeded. Jan 23 17:20:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ff10b346\x2dbf44\x2d42d4\x2d8792\x2d51ed322da685.mount: Succeeded. Jan 23 17:20:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ff10b346\x2dbf44\x2d42d4\x2d8792\x2d51ed322da685.mount: Succeeded.
Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.066306263Z" level=info msg="runSandbox: deleting pod ID d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c from idIndex" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.066329803Z" level=info msg="runSandbox: removing pod sandbox d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.066343421Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.066355018Z" level=info msg="runSandbox: unmounting shmPath for sandbox d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.082426942Z" level=info msg="runSandbox: removing pod sandbox from storage: d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.085929732Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:37.085947627Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=809b05ee-b261-438a-8317-52f09815f0fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:37.086177 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:20:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:37.086234 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:37.086256 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:37.086305 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d9ef7c93579a287166205fb5d29bbd405d39d459a7a371387e394680f8665f4c): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:20:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:38.996356 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:38.996662624Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:38.996703357Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.009626826Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/f500e31f-ac6a-463b-92df-1f369d2688d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.009647381Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.041348006Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.041378666Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.041696848Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.041728182Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa" 
id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4a8287a0\x2d2755\x2d45c7\x2da9d7\x2da292ba7d0a13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4a8287a0\x2d2755\x2d45c7\x2da9d7\x2da292ba7d0a13.mount has successfully entered the 'dead' state. Jan 23 17:20:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d36c9841\x2d68a1\x2d4e9c\x2d8640\x2d43a9b3e0563e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d36c9841\x2d68a1\x2d4e9c\x2d8640\x2d43a9b3e0563e.mount has successfully entered the 'dead' state. Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.075308319Z" level=info msg="runSandbox: deleting pod ID 8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b from idIndex" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.075332493Z" level=info msg="runSandbox: removing pod sandbox 8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.075347411Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.075359507Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.079309065Z" level=info msg="runSandbox: deleting pod ID dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa from idIndex" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.079333443Z" level=info msg="runSandbox: removing pod sandbox dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.079345555Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.079356328Z" level=info msg="runSandbox: unmounting shmPath for sandbox dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.091442843Z" level=info msg="runSandbox: removing pod sandbox from storage: 8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.094374234Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.094391316Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=49b2b081-7809-4e51-98e8-c3e4cb01d91c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:39.094636 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:39.094681 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:39.094704 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:39.094755 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.099404246Z" level=info msg="runSandbox: removing pod sandbox from storage: dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.105649002Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:39.105683633Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=c6042b08-ac84-41ac-acb4-25034174fe04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:39.105926 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:39.105966 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:39.105989 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:20:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:39.106035 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4a8287a0\x2d2755\x2d45c7\x2da9d7\x2da292ba7d0a13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4a8287a0\x2d2755\x2d45c7\x2da9d7\x2da292ba7d0a13.mount has successfully entered the 'dead' state. Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d36c9841\x2d68a1\x2d4e9c\x2d8640\x2d43a9b3e0563e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d36c9841\x2d68a1\x2d4e9c\x2d8640\x2d43a9b3e0563e.mount has successfully entered the 'dead' state. Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4a8287a0\x2d2755\x2d45c7\x2da9d7\x2da292ba7d0a13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4a8287a0\x2d2755\x2d45c7\x2da9d7\x2da292ba7d0a13.mount has successfully entered the 'dead' state. Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d36c9841\x2d68a1\x2d4e9c\x2d8640\x2d43a9b3e0563e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d36c9841\x2d68a1\x2d4e9c\x2d8640\x2d43a9b3e0563e.mount has successfully entered the 'dead' state. Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dab173a94c93357cf65928f9e40d774b6d08a7cee84b58e6da908dbcd78de3fa-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8d03a2392af55a14e3735b7d60454fc57bcbde6e740930f54daec1f8c4dece3b-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.032872430Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.032905468Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-147fb098\x2d1ea3\x2d4a88\x2d8827\x2de5c4493de8d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-147fb098\x2d1ea3\x2d4a88\x2d8827\x2de5c4493de8d1.mount has successfully entered the 'dead' state. Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-147fb098\x2d1ea3\x2d4a88\x2d8827\x2de5c4493de8d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-147fb098\x2d1ea3\x2d4a88\x2d8827\x2de5c4493de8d1.mount has successfully entered the 'dead' state. Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-147fb098\x2d1ea3\x2d4a88\x2d8827\x2de5c4493de8d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-147fb098\x2d1ea3\x2d4a88\x2d8827\x2de5c4493de8d1.mount has successfully entered the 'dead' state. 
Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.063284689Z" level=info msg="runSandbox: deleting pod ID 46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70 from idIndex" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.063307567Z" level=info msg="runSandbox: removing pod sandbox 46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.063320809Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.063332363Z" level=info msg="runSandbox: unmounting shmPath for sandbox 46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.083425623Z" level=info msg="runSandbox: removing pod sandbox from storage: 46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.087178194Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:40.087196447Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=a0db956a-68aa-47fc-a8ce-505626e051f3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:40.087436 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:20:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:40.087475 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:20:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:40.087499 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:20:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:40.087540 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(46d92ada7f88e209ef5a1fdcf166b4d98cb1f13abc7d2ec59c85c90f86859d70): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.032795756Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.032832314Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-044e1bb1\x2d9b94\x2d4a71\x2da273\x2d57c78518442a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-044e1bb1\x2d9b94\x2d4a71\x2da273\x2d57c78518442a.mount has successfully entered the 'dead' state. Jan 23 17:20:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-044e1bb1\x2d9b94\x2d4a71\x2da273\x2d57c78518442a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-044e1bb1\x2d9b94\x2d4a71\x2da273\x2d57c78518442a.mount has successfully entered the 'dead' state. Jan 23 17:20:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-044e1bb1\x2d9b94\x2d4a71\x2da273\x2d57c78518442a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-044e1bb1\x2d9b94\x2d4a71\x2da273\x2d57c78518442a.mount has successfully entered the 'dead' state. 
Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.076306146Z" level=info msg="runSandbox: deleting pod ID 94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2 from idIndex" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.076330104Z" level=info msg="runSandbox: removing pod sandbox 94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.076343596Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.076354692Z" level=info msg="runSandbox: unmounting shmPath for sandbox 94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.096438883Z" level=info msg="runSandbox: removing pod sandbox from storage: 94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.099884795Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:41.099903734Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=8df04c91-d048-4b0a-8838-71e95bba4928 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:41.100115 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:20:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:41.100163 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:41.100188 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:41.100244 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(94fc4ca479fd62b7e0c0502a479ba5f51c2903b9f4fd88c3d296cb5f543b2ef2): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.031301397Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.031341714Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-90da51c3\x2d248a\x2d47b9\x2db2ac\x2d151c7944e056.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-90da51c3\x2d248a\x2d47b9\x2db2ac\x2d151c7944e056.mount has successfully entered the 'dead' state. Jan 23 17:20:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-90da51c3\x2d248a\x2d47b9\x2db2ac\x2d151c7944e056.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-90da51c3\x2d248a\x2d47b9\x2db2ac\x2d151c7944e056.mount has successfully entered the 'dead' state. Jan 23 17:20:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-90da51c3\x2d248a\x2d47b9\x2db2ac\x2d151c7944e056.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-90da51c3\x2d248a\x2d47b9\x2db2ac\x2d151c7944e056.mount has successfully entered the 'dead' state. 
Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.076304090Z" level=info msg="runSandbox: deleting pod ID da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c from idIndex" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.076328530Z" level=info msg="runSandbox: removing pod sandbox da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.076343536Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.076354672Z" level=info msg="runSandbox: unmounting shmPath for sandbox da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.092431298Z" level=info msg="runSandbox: removing pod sandbox from storage: da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.095798210Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:42.095816038Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=3ffdfd42-6cb1-4f85-bf3d-628709c0451e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:42.096029 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:20:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:42.096073 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:20:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:42.096096 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:20:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:42.096146 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(da561aa890439449329e742554ed3952a480d82491595e46566541a7e460578c): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:20:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:45.995900 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:20:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:45.996368710Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:45.996409080Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:46.007587528Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/50d53da5-08ad-4abf-ab89-8158bbcd18d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:46.007610480Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:46.996260 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:46.996334 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:20:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:46.996652047Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:46.996695778Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:46.996803838Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:46.996833179Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:46.997266 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:20:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:46.997746 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:20:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:47.012106765Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/f3a8b894-f03b-4b3f-ac21-5867c5ebc681 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:47.012126076Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:47.013473288Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/a2fc2aed-b51b-4723-8de0-e365cfdb5a4e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:47.013492685Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.035071342Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.035111084Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-59c2746b\x2d8f67\x2d4513\x2dbad7\x2df0d274422a76.mount: Succeeded. Jan 23 17:20:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-59c2746b\x2d8f67\x2d4513\x2dbad7\x2df0d274422a76.mount: Succeeded. Jan 23 17:20:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-59c2746b\x2d8f67\x2d4513\x2dbad7\x2df0d274422a76.mount: Succeeded. Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.076301878Z" level=info msg="runSandbox: deleting pod ID 4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71 from idIndex" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.076327958Z" level=info msg="runSandbox: removing pod sandbox 4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.076341103Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.076352334Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71-userdata-shm.mount: Succeeded.
Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.088423882Z" level=info msg="runSandbox: removing pod sandbox from storage: 4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.091468258Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:48.091487492Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=73bbeb50-c9da-4054-b010-5823143c99e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:48.091688 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:20:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:48.091732 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:20:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:48.091758 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:20:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:20:48.091811 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(4bd61aa5f50fe99798827dfccf6e88b7352f40abea4913032048b767c6686a71): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:20:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:49.996142 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:20:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:49.996450126Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:49.996488078Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:50.009937517Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/9c6ea316-8d39-41ad-ad06-1df561ab9378 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:50.009957015Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:50.995720 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:50.996025291Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:50.996065098Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:51.010383633Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/e56320f0-2940-4b45-9367-ee94841eb8fc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:51.010406862Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:51.995490 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:20:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:51.995880031Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:51.995925078Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:52.006624435Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/92fbc92d-3b7f-4ec0-85f7-cdb6945a1a18 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:52.006647067Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:52.996438 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:20:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:52.996771362Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:52.996818989Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:53.009444469Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d774ee7b-2875-4432-91eb-348c00134a67 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:53.009468948Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:54.996183 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:20:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:54.996677886Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:54.996737554Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:55.009062961Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/784d666a-4d69-483d-86d5-e804b7dac538 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:55.009085733Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:20:55.996607 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:20:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:55.996973757Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:20:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:55.997025545Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:20:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:56.009053791Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/987d8b12-a57e-4e72-a0cf-d0107e959a46 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:20:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:56.009094994Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:20:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:20:58.143761158Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:21:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:00.995902 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:21:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:00.996341215Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:00.996410824Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:21:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:00.996655 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:21:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:00.997307 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:21:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:01.008929252Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/638fa7c9-da54-4282-b01a-1d46da73affb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:01.008954126Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610202921Z" level=info msg="NetworkStart: stopping network for sandbox 337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610280918Z" level=info msg="NetworkStart: stopping network for sandbox 116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610366848Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/0825d8ae-8ced-44d8-9900-a5ca36daf1cf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610391542Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610398399Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:21:13.610405437Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610418948Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/314c6e7d-7747-46fe-bc7b-57dbca25c2a1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610444751Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610452070Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610458015Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610547627Z" level=info msg="NetworkStart: stopping network for sandbox 7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610660510Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/3d836407-72ad-4ed6-b146-f9f992c0c51a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610683028Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610689337Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.610695046Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612561397Z" level=info msg="NetworkStart: stopping network for sandbox aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612619614Z" level=info msg="NetworkStart: stopping network for sandbox 0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612666182Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager 
ID:aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d000539c-3943-4213-b3b2-5b1e62b31d95 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612687729Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612694026Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612700407Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612722574Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/6931d169-1d03-493e-94c0-3e624bf76ecb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612742587Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612748733Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:13.612754447Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:15.996406 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:21:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:15.996914 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:18.022439559Z" level=info msg="NetworkStart: stopping network for sandbox 328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:18.022591406Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/5682b084-e414-4fc9-9917-d4903313146f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:18 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:21:18.022613196Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:18.022620525Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:18.022627682Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:24.024514662Z" level=info msg="NetworkStart: stopping network for sandbox 0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:24.024654251Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/f500e31f-ac6a-463b-92df-1f369d2688d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:24.024676959Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:24.024683444Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:24.024689074Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:27.896097 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:27.896116 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:27.896123 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:27.896128 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:27.896134 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:27.896141 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:21:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:27.896148 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 
17:21:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:28.142036355Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:21:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:29.996533 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:21:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:29.997054 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:21:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:31.022069768Z" level=info msg="NetworkStart: stopping network for sandbox 534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:31.022215287Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/50d53da5-08ad-4abf-ab89-8158bbcd18d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:31.022238955Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:31.022246418Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:31.022253332Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025328511Z" level=info msg="NetworkStart: stopping network for sandbox 9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025473452Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/f3a8b894-f03b-4b3f-ac21-5867c5ebc681 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025498145Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025505552Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025512279Z" level=info msg="Deleting pod 
openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025704997Z" level=info msg="NetworkStart: stopping network for sandbox c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025820878Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/a2fc2aed-b51b-4723-8de0-e365cfdb5a4e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025843379Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025851218Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:32.025857607Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:35.024034659Z" level=info msg="NetworkStart: stopping network for sandbox f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:35.024269307Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/9c6ea316-8d39-41ad-ad06-1df561ab9378 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:35.024295160Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:35.024302097Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:35.024308951Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:36.023388882Z" level=info msg="NetworkStart: stopping network for sandbox ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:36.023528813Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd 
ID:ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/e56320f0-2940-4b45-9367-ee94841eb8fc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:36.023551513Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:36.023557769Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:36.023565113Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:37.020504817Z" level=info msg="NetworkStart: stopping network for sandbox 48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:37.020638327Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/92fbc92d-3b7f-4ec0-85f7-cdb6945a1a18 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:37.020661396Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:37.020668318Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:37.020674796Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:38.022148685Z" level=info msg="NetworkStart: stopping network for sandbox f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:38.022293176Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d774ee7b-2875-4432-91eb-348c00134a67 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:38.022315572Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:38.022321950Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:38 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:21:38.022328338Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:40.022691980Z" level=info msg="NetworkStart: stopping network for sandbox 83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:40.022848038Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/784d666a-4d69-483d-86d5-e804b7dac538 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:40.022874666Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:40.022882256Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:40.022888753Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:41.022834382Z" level=info msg="NetworkStart: stopping network for sandbox 15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:41.022974746Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/987d8b12-a57e-4e72-a0cf-d0107e959a46 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:41.022996373Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:41.023002291Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:41.023008702Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:44.996387 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:21:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:44.997015 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node 
pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:21:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:46.023735787Z" level=info msg="NetworkStart: stopping network for sandbox a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:46.023883334Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/638fa7c9-da54-4282-b01a-1d46da73affb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:46.023906200Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:21:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:46.023913303Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:21:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:46.023921327Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:55.996168 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:21:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:55.996700 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.147291191Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.621851698Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.621892811Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.622482885Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.622507908Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.622553282Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.622587384Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.623539568Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.623570149Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72" 
id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.623612059Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.623647350Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3d836407\x2d72ad\x2d4ed6\x2db146\x2df9f992c0c51a.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-314c6e7d\x2d7747\x2d46fe\x2dbc7b\x2d57dbca25c2a1.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0825d8ae\x2d8ced\x2d44d8\x2d9900\x2da5ca36daf1cf.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6931d169\x2d1d03\x2d493e\x2d94c0\x2d3e624bf76ecb.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d000539c\x2d3943\x2d4213\x2db3b2\x2d5b1e62b31d95.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-314c6e7d\x2d7747\x2d46fe\x2dbc7b\x2d57dbca25c2a1.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0825d8ae\x2d8ced\x2d44d8\x2d9900\x2da5ca36daf1cf.mount: Succeeded.
Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6931d169\x2d1d03\x2d493e\x2d94c0\x2d3e624bf76ecb.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d000539c\x2d3943\x2d4213\x2db3b2\x2d5b1e62b31d95.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3d836407\x2d72ad\x2d4ed6\x2db146\x2df9f992c0c51a.mount: Succeeded. Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.675322946Z" level=info msg="runSandbox: deleting pod ID 337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d from idIndex" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.675488210Z" level=info msg="runSandbox: removing pod sandbox 337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.675504239Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.675516135Z" level=info msg="runSandbox: unmounting shmPath for sandbox 337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683322765Z" level=info msg="runSandbox: deleting pod ID 7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1 from idIndex" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683337275Z" level=info msg="runSandbox: deleting pod ID 116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0 from idIndex" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683364446Z" level=info msg="runSandbox: removing pod sandbox 116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683408922Z" level=info msg="runSandbox:
deleting container ID from idIndex for sandbox 116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683344898Z" level=info msg="runSandbox: removing pod sandbox 7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683429838Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683432348Z" level=info msg="runSandbox: unmounting shmPath for sandbox 116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683443893Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683324902Z" level=info msg="runSandbox: deleting pod ID aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72 from idIndex" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683487094Z" level=info msg="runSandbox: removing pod sandbox aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683499232Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683514070Z" level=info msg="runSandbox: unmounting shmPath for sandbox aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683327592Z" level=info msg="runSandbox: deleting pod ID 0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb from idIndex" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683574895Z" level=info msg="runSandbox: removing pod sandbox 0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683594370Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb" id=054ebec2-d143-4588-adaf-afcbc3fa3040 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.683613296Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.695442875Z" level=info msg="runSandbox: removing pod sandbox from storage: 0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.695455324Z" level=info msg="runSandbox: removing pod sandbox from storage: 337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.698444488Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.698462725Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=054ebec2-d143-4588-adaf-afcbc3fa3040 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.698703 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.698749 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.698771 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.698818 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.699457624Z" level=info msg="runSandbox: removing pod sandbox from storage: 7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.699459368Z" level=info msg="runSandbox: removing pod sandbox from storage: 116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.699461107Z" level=info msg="runSandbox: removing pod sandbox from storage: aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.701701864Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.701723161Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=1fe3a09b-ac97-475b-8ef6-f45eb4f01c6b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.701888 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.701926 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.701950 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.701995 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.704676623Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.704694659Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=f540017c-002a-4760-90ac-8320e6a2b30e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.704882 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.704913 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.704934 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.704976 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.707671901Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.707692193Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=e64a2800-6a7a-4a04-86d9-c4e37b51e45b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.707911 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.707956 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.707979 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.708026 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.710523730Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.710540890Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=dad2ffee-a64d-4fbb-b7d3-848bc9b08efd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.710706 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.710742 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.710762 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:21:58.710800 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:58.747490 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:58.747586 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:58.747706 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.747834095Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.747870029Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.747908405Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:58.747926 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.747950126Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.747835328Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.748026805Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:21:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:21:58.748081 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.748108777Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.748134014Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.748347215Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.748372970Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.775889693Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/d621459e-430a-46c4-a0db-ce1993733ec6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.775927023Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.775896320Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/7f49f096-04ac-4953-8ef9-ce5426003596 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.775964223Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.778775193Z" level=info msg="Got pod network 
&{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/3d59f8f3-f396-4137-a673-d735bc073bc0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.778797387Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.781400939Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/1beffe1b-c682-4c73-a740-1b6cb877c6fb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.781421048Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.782829677Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a16de83f-4b7a-48ec-92cd-c1f522051c8d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:21:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:21:58.782850731Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6931d169\x2d1d03\x2d493e\x2d94c0\x2d3e624bf76ecb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6931d169\x2d1d03\x2d493e\x2d94c0\x2d3e624bf76ecb.mount has successfully entered the 'dead' state. Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d000539c\x2d3943\x2d4213\x2db3b2\x2d5b1e62b31d95.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d000539c\x2d3943\x2d4213\x2db3b2\x2d5b1e62b31d95.mount has successfully entered the 'dead' state. Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3d836407\x2d72ad\x2d4ed6\x2db146\x2df9f992c0c51a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3d836407\x2d72ad\x2d4ed6\x2db146\x2df9f992c0c51a.mount has successfully entered the 'dead' state. Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-314c6e7d\x2d7747\x2d46fe\x2dbc7b\x2d57dbca25c2a1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-314c6e7d\x2d7747\x2d46fe\x2dbc7b\x2d57dbca25c2a1.mount has successfully entered the 'dead' state. 
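
The &{Name:... Namespace:... ID:... UID:... NetNS:... Networks:[] ...} values in the "Got pod network" records above are Go structs that CRI-O dumps with fmt's %+v verb just before handing each new sandbox to the CNI plugin chain. A minimal sketch of how such a line is produced, using a locally defined stand-in that only mirrors the fields visible in the log (it is not CRI-O's actual type, which carries additional runtime-config fields):

    package main

    import "fmt"

    // PodNetwork is a stand-in mirroring only the fields visible in the
    // "Got pod network" records above; it is not CRI-O's actual type.
    type PodNetwork struct {
        Name      string
        Namespace string
        ID        string   // infra (sandbox) container ID
        UID       string   // pod UID
        NetNS     string   // bind-mounted network namespace path
        Networks  []string // empty above: no secondary attachments requested
    }

    func main() {
        pn := PodNetwork{
            Name:      "apiserver-86c7cf6467-bbxls",
            Namespace: "openshift-oauth-apiserver",
            ID:        "b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6",
            UID:       "b68fa2a4-e557-4154-b0c2-64f449cfd597",
            NetNS:     "/var/run/netns/d621459e-430a-46c4-a0db-ce1993733ec6",
        }
        // %+v prints field names, yielding the &{Name:... NetNS:...} form above.
        fmt.Printf("Got pod network %+v\n", &pn)
    }

The empty Networks:[] slice shows that no secondary networks are attached; only the delegated default network is involved, which is why the readiness of 10-ovn-kubernetes.conf is the single gate for every one of these pods.
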
Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0825d8ae\x2d8ced\x2d44d8\x2d9900\x2da5ca36daf1cf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0825d8ae\x2d8ced\x2d44d8\x2d9900\x2da5ca36daf1cf.mount has successfully entered the 'dead' state. Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-116b5900f37455b53b08e15948117037718b3d0c999c2a5b425c5f63ec281ab0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0f203288d438c768539da6786d08920e1a2e3adea59f5bb9fb849605fdcc68cb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-aae74129b18de460fa18cf1df69b3fccced72b611adadf06a08da3e514315d72-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7453c615dd629304089aab390afb20c3ca9a90950e5adfa25d39c644ce0042b1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:21:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-337da008c65cc552e4cc897a4599c0c343e836688a59d9d5ec2609975d00c87d-userdata-shm.mount has successfully entered the 'dead' state. 
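
Every failure in this section, on both the add and delete paths, reduces to the same condition: the Multus plugin will not delegate to the default network until a readiness indicator file exists at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf (written by OVN-Kubernetes once it is functional), and it waits for that file with a PollImmediate loop. A minimal sketch of that wait, assuming the upstream Multus approach; the helper name checkReadiness and the 1s/10m durations are illustrative, not taken from this log:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // checkReadiness blocks until readinessFile exists or timeout expires.
    func checkReadiness(readinessFile string, interval, timeout time.Duration) error {
        return wait.PollImmediate(interval, timeout, func() (bool, error) {
            if _, err := os.Stat(readinessFile); err != nil {
                return false, nil // file not there yet; keep polling
            }
            return true, nil
        })
    }

    func main() {
        err := checkReadiness("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 10*time.Minute)
        fmt.Println(err)
    }

On timeout, wait.PollImmediate returns wait.ErrWaitTimeout, whose text, "timed out waiting for the condition", is exactly the string wrapped into every failure above.
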
Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.033417978Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.033463129Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5682b084\x2de414\x2d4fc9\x2d9917\x2dd4903313146f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5682b084\x2de414\x2d4fc9\x2d9917\x2dd4903313146f.mount has successfully entered the 'dead' state. Jan 23 17:22:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5682b084\x2de414\x2d4fc9\x2d9917\x2dd4903313146f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5682b084\x2de414\x2d4fc9\x2d9917\x2dd4903313146f.mount has successfully entered the 'dead' state. Jan 23 17:22:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5682b084\x2de414\x2d4fc9\x2d9917\x2dd4903313146f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5682b084\x2de414\x2d4fc9\x2d9917\x2dd4903313146f.mount has successfully entered the 'dead' state. 
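
After each failed add, the runSandbox records above trace a fixed teardown order: delete the pod ID from the idIndex, remove the pod sandbox, delete the container ID from the idIndex, unmount the sandbox's shmPath, remove the sandbox from storage, then release the reserved container and pod sandbox names; the accompanying run-utsns/run-ipcns/run-netns mount units entering the 'dead' state are the sandbox's namespace bind mounts being dropped. A sketch that replays that sequence with hypothetical helpers (the step list comes from the log; none of these functions are CRI-O's real API):

    package main

    import "fmt"

    // step is a hypothetical stand-in; it only records the step name.
    func step(msg string) error { fmt.Println("runSandbox:", msg); return nil }

    // cleanupFailedSandbox replays, in order, the cleanup the log traces for
    // every sandbox whose CNI add timed out. None of this is CRI-O's real API.
    func cleanupFailedSandbox(id string) error {
        for _, msg := range []string{
            "deleting pod ID " + id + " from idIndex",
            "removing pod sandbox " + id,
            "deleting container ID from idIndex for sandbox " + id,
            "unmounting shmPath for sandbox " + id,
            "removing pod sandbox from storage: " + id,
            "releasing container name",
            "releasing pod sandbox name",
        } {
            if err := step(msg); err != nil {
                return fmt.Errorf("%s: %w", msg, err)
            }
        }
        return nil
    }

    func main() { _ = cleanupFailedSandbox("328eb7b1aa60...") }
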
Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.072308323Z" level=info msg="runSandbox: deleting pod ID 328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e from idIndex" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.072338382Z" level=info msg="runSandbox: removing pod sandbox 328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.072355275Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.072369243Z" level=info msg="runSandbox: unmounting shmPath for sandbox 328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.092484702Z" level=info msg="runSandbox: removing pod sandbox from storage: 328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.095349949Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:03.095369465Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=e10852da-f2ee-4d6b-a4e7-637160ef13fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:03.095580 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:22:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:03.095627 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:03.095651 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:03.095704 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(328eb7b1aa60a40be004388a97f00aaa1bcb9d0bf4aa25b70d18ad7325b3537e): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.035243077Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.035281851Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f500e31f\x2dac6a\x2d463b\x2d92df\x2d1f369d2688d0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f500e31f\x2dac6a\x2d463b\x2d92df\x2d1f369d2688d0.mount has successfully entered the 'dead' state. Jan 23 17:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f500e31f\x2dac6a\x2d463b\x2d92df\x2d1f369d2688d0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f500e31f\x2dac6a\x2d463b\x2d92df\x2d1f369d2688d0.mount has successfully entered the 'dead' state. Jan 23 17:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f500e31f\x2dac6a\x2d463b\x2d92df\x2d1f369d2688d0.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f500e31f\x2dac6a\x2d463b\x2d92df\x2d1f369d2688d0.mount has successfully entered the 'dead' state. 
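
The recurring four-line kubenswrapper sequence in this section (remote_runtime.go:222, kuberuntime_sandbox.go:71, kuberuntime_manager.go:772, pod_workers.go:965) is a single error crossing the CRI gRPC boundary and being re-logged at each kubelet layer. CRI-O returns a plain error from RunPodSandbox, so grpc-go assigns it code Unknown, which is why each message begins with "rpc error: code = Unknown desc = ...". A minimal sketch of how that prefix arises (message abbreviated):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func main() {
        // A plain error returned by a gRPC handler reaches the client as
        // codes.Unknown; its Error() string carries the prefix seen on every
        // kubelet record above.
        err := status.Error(codes.Unknown, "failed to create pod network sandbox: ...")
        fmt.Println(err)
        // Output: rpc error: code = Unknown desc = failed to create pod network sandbox: ...
    }

The outermost pod_workers.go line then escapes the quotes a further level (the \\\" runs above) because the already-quoted error is embedded in yet another quoted string.
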
Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.081357340Z" level=info msg="runSandbox: deleting pod ID 0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064 from idIndex" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.081383511Z" level=info msg="runSandbox: removing pod sandbox 0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.081405508Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.081422738Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.097432584Z" level=info msg="runSandbox: removing pod sandbox from storage: 0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.101435123Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:09.101452675Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=4c4ae85e-e330-44ab-b00a-3e7a734e76fb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:09.101668 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:09.101842 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:09.101865 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:09.101918 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0d81703f139fbdc27d63892b69877baa070275278f47154d5b1462c3dbfe1064): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:09.997062 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:22:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:09.997576 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.033077308Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.033115660Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-50d53da5\x2d08ad\x2d4abf\x2dab89\x2d8158bbcd18d4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-50d53da5\x2d08ad\x2d4abf\x2dab89\x2d8158bbcd18d4.mount has successfully entered the 'dead' state. Jan 23 17:22:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-50d53da5\x2d08ad\x2d4abf\x2dab89\x2d8158bbcd18d4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-50d53da5\x2d08ad\x2d4abf\x2dab89\x2d8158bbcd18d4.mount has successfully entered the 'dead' state. Jan 23 17:22:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-50d53da5\x2d08ad\x2d4abf\x2dab89\x2d8158bbcd18d4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-50d53da5\x2d08ad\x2d4abf\x2dab89\x2d8158bbcd18d4.mount has successfully entered the 'dead' state. 
Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.077286364Z" level=info msg="runSandbox: deleting pod ID 534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7 from idIndex" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.077311805Z" level=info msg="runSandbox: removing pod sandbox 534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.077325769Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.077341198Z" level=info msg="runSandbox: unmounting shmPath for sandbox 534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.097417456Z" level=info msg="runSandbox: removing pod sandbox from storage: 534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.100967191Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.100985300Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=3b0e6b59-3d3a-49f4-b933-1ddebd8bb1e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:16.101203 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:22:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:16.101254 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:22:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:16.101278 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:22:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:16.101325 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(534db9b7d14c684c71fb1211ab84097ffdaf0e021c41f02ee46cb7e028b310c7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:22:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:16.996272 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.996636801Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:16.996683057Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.008782980Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/02c38a48-a80f-4d18-b857-79b5361ccebd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.008806235Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.036091622Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.036134742Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.036817475Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.036873107Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a2fc2aed\x2db51b\x2d4723\x2d8de0\x2de365cfdb5a4e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a2fc2aed\x2db51b\x2d4723\x2d8de0\x2de365cfdb5a4e.mount has successfully entered the 'dead' state. Jan 23 17:22:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f3a8b894\x2df03b\x2d4b3f\x2dac21\x2d5867c5ebc681.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f3a8b894\x2df03b\x2d4b3f\x2dac21\x2d5867c5ebc681.mount has successfully entered the 'dead' state. Jan 23 17:22:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f3a8b894\x2df03b\x2d4b3f\x2dac21\x2d5867c5ebc681.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f3a8b894\x2df03b\x2d4b3f\x2dac21\x2d5867c5ebc681.mount has successfully entered the 'dead' state. Jan 23 17:22:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a2fc2aed\x2db51b\x2d4723\x2d8de0\x2de365cfdb5a4e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a2fc2aed\x2db51b\x2d4723\x2d8de0\x2de365cfdb5a4e.mount has successfully entered the 'dead' state. Jan 23 17:22:17 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a2fc2aed\x2db51b\x2d4723\x2d8de0\x2de365cfdb5a4e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a2fc2aed\x2db51b\x2d4723\x2d8de0\x2de365cfdb5a4e.mount has successfully entered the 'dead' state. 
Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.085323257Z" level=info msg="runSandbox: deleting pod ID c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1 from idIndex" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.085353234Z" level=info msg="runSandbox: removing pod sandbox c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.085369213Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.085389762Z" level=info msg="runSandbox: unmounting shmPath for sandbox c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.085326614Z" level=info msg="runSandbox: deleting pod ID 9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0 from idIndex" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.085439363Z" level=info msg="runSandbox: removing pod sandbox 9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.085454905Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.085467487Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.101467323Z" level=info msg="runSandbox: removing pod sandbox from storage: 9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.101473616Z" level=info msg="runSandbox: removing pod sandbox from storage: c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.104664169Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.104684351Z" level=info 
msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=fa550ff1-24c7-4502-b69f-821e8da6d1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:17.104944 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:22:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:17.104991 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:22:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:17.105015 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:22:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:17.105065 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.107809277Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:17.107828469Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=3f94f7aa-a5e2-4940-a3c1-be0b117efdf5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:17.108057 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:22:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:17.108103 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:17.108126 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:17.108176 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:22:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f3a8b894\x2df03b\x2d4b3f\x2dac21\x2d5867c5ebc681.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f3a8b894\x2df03b\x2d4b3f\x2dac21\x2d5867c5ebc681.mount has successfully entered the 'dead' state. Jan 23 17:22:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c87c90310b3f8d87034558e64d2168dd398b319c57568326ccad157c0764fdd1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9941af051fae6d127021accc78c0c5999adf405ffc093beb1c620ab9387e28b0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.035164163Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.035200366Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9c6ea316\x2d8d39\x2d41ad\x2dad06\x2d1df561ab9378.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9c6ea316\x2d8d39\x2d41ad\x2dad06\x2d1df561ab9378.mount has successfully entered the 'dead' state. Jan 23 17:22:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9c6ea316\x2d8d39\x2d41ad\x2dad06\x2d1df561ab9378.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9c6ea316\x2d8d39\x2d41ad\x2dad06\x2d1df561ab9378.mount has successfully entered the 'dead' state. 
Jan 23 17:22:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9c6ea316\x2d8d39\x2d41ad\x2dad06\x2d1df561ab9378.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9c6ea316\x2d8d39\x2d41ad\x2dad06\x2d1df561ab9378.mount has successfully entered the 'dead' state. Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.085279752Z" level=info msg="runSandbox: deleting pod ID f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39 from idIndex" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.085304369Z" level=info msg="runSandbox: removing pod sandbox f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.085317283Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.085329033Z" level=info msg="runSandbox: unmounting shmPath for sandbox f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.101433807Z" level=info msg="runSandbox: removing pod sandbox from storage: f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.104956554Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:20.104974955Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=4898eeb5-2d31-4339-ae64-e73259291874 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:20.105176 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:22:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:20.105228 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:22:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:20.105250 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:22:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:20.105295 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f1cd08b1e5c3403b433d7742f6a377893fac4c413bb18aa7b47fde0ad83baa39): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.034470322Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.034511298Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e56320f0\x2d2940\x2d4b45\x2d9367\x2dee94841eb8fc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e56320f0\x2d2940\x2d4b45\x2d9367\x2dee94841eb8fc.mount has successfully entered the 'dead' state. Jan 23 17:22:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e56320f0\x2d2940\x2d4b45\x2d9367\x2dee94841eb8fc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e56320f0\x2d2940\x2d4b45\x2d9367\x2dee94841eb8fc.mount has successfully entered the 'dead' state. Jan 23 17:22:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e56320f0\x2d2940\x2d4b45\x2d9367\x2dee94841eb8fc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e56320f0\x2d2940\x2d4b45\x2d9367\x2dee94841eb8fc.mount has successfully entered the 'dead' state. 
Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.089283168Z" level=info msg="runSandbox: deleting pod ID ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a from idIndex" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.089306668Z" level=info msg="runSandbox: removing pod sandbox ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.089319434Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.089333782Z" level=info msg="runSandbox: unmounting shmPath for sandbox ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.109431579Z" level=info msg="runSandbox: removing pod sandbox from storage: ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.112699041Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.112716757Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=42337ab4-1baf-483b-aa88-693eb77554fe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:21.112937 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:22:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:21.112997 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:21.113019 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:21.113065 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(ad4aad1e87ef8694783ec0f53d246bfb5b8a98f65f3f4fc2bf1dee7ecb31d72a): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:22:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:21.996640 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.996965103Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:21.997005530Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.011396983Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/5677485b-060c-4913-b60d-02244fd82c5a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.011664062Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.031118471Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.031152492Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-92fbc92d\x2d3b7f\x2d4ec0\x2d85f7\x2dcdb6945a1a18.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-92fbc92d\x2d3b7f\x2d4ec0\x2d85f7\x2dcdb6945a1a18.mount has successfully entered the 'dead' state. Jan 23 17:22:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-92fbc92d\x2d3b7f\x2d4ec0\x2d85f7\x2dcdb6945a1a18.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-92fbc92d\x2d3b7f\x2d4ec0\x2d85f7\x2dcdb6945a1a18.mount has successfully entered the 'dead' state. Jan 23 17:22:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-92fbc92d\x2d3b7f\x2d4ec0\x2d85f7\x2dcdb6945a1a18.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-92fbc92d\x2d3b7f\x2d4ec0\x2d85f7\x2dcdb6945a1a18.mount has successfully entered the 'dead' state. 
Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.074305288Z" level=info msg="runSandbox: deleting pod ID 48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56 from idIndex" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.074327400Z" level=info msg="runSandbox: removing pod sandbox 48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.074339855Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.074350160Z" level=info msg="runSandbox: unmounting shmPath for sandbox 48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.090436621Z" level=info msg="runSandbox: removing pod sandbox from storage: 48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.093248642Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:22.093267167Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=56bcc9fb-33d3-4629-92ae-f4ea609db949 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:22.093491 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:22:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:22.093536 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:22:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:22.093560 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:22:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:22.093625 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(48b8b27b219ffd4ef18503517dbce9a45993b6c554c5a64d55988382a8320c56): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.032745687Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.032786551Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d774ee7b\x2d2875\x2d4432\x2d91eb\x2d348c00134a67.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d774ee7b\x2d2875\x2d4432\x2d91eb\x2d348c00134a67.mount has successfully entered the 'dead' state. Jan 23 17:22:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d774ee7b\x2d2875\x2d4432\x2d91eb\x2d348c00134a67.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d774ee7b\x2d2875\x2d4432\x2d91eb\x2d348c00134a67.mount has successfully entered the 'dead' state. Jan 23 17:22:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d774ee7b\x2d2875\x2d4432\x2d91eb\x2d348c00134a67.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d774ee7b\x2d2875\x2d4432\x2d91eb\x2d348c00134a67.mount has successfully entered the 'dead' state. 
Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.084316403Z" level=info msg="runSandbox: deleting pod ID f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3 from idIndex" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.084343182Z" level=info msg="runSandbox: removing pod sandbox f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.084360505Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.084374496Z" level=info msg="runSandbox: unmounting shmPath for sandbox f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.096428097Z" level=info msg="runSandbox: removing pod sandbox from storage: f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.099907503Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:23.099926114Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=7527171b-a9a7-497a-b3ef-03751325f93d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:23.100082 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:23.100124 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:23.100147 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:23.100190 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(f216d94d6ca7654febf05d5757d7f8badd19128e76e4ebcc87c4f056f8b5f0d3): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:23.996632 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" Jan 23 17:22:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:23.997152 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.033450852Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.033496910Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-784d666a\x2d4d69\x2d483d\x2d86d5\x2de804b7dac538.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-784d666a\x2d4d69\x2d483d\x2d86d5\x2de804b7dac538.mount has successfully entered the 'dead' state. Jan 23 17:22:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-784d666a\x2d4d69\x2d483d\x2d86d5\x2de804b7dac538.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-784d666a\x2d4d69\x2d483d\x2d86d5\x2de804b7dac538.mount has successfully entered the 'dead' state. Jan 23 17:22:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-784d666a\x2d4d69\x2d483d\x2d86d5\x2de804b7dac538.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-784d666a\x2d4d69\x2d483d\x2d86d5\x2de804b7dac538.mount has successfully entered the 'dead' state. 
Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.072412335Z" level=info msg="runSandbox: deleting pod ID 83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f from idIndex" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.072441981Z" level=info msg="runSandbox: removing pod sandbox 83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.072460919Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.072480971Z" level=info msg="runSandbox: unmounting shmPath for sandbox 83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.080471561Z" level=info msg="runSandbox: removing pod sandbox from storage: 83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.083857974Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:25.083881606Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=e49b51c7-2809-47b8-b782-0d2d46ff9106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:25.084110 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:22:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:25.084160 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:22:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:25.084186 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:22:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:25.084244 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(83b8f87bd9059d85d23d629aa8ec09df25a2e2a3d6198fd2ef384cadb02d444f): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.034541476Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.034576210Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-987d8b12\x2da57e\x2d4e72\x2da0cf\x2dd0107e959a46.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-987d8b12\x2da57e\x2d4e72\x2da0cf\x2dd0107e959a46.mount has successfully entered the 'dead' state. Jan 23 17:22:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-987d8b12\x2da57e\x2d4e72\x2da0cf\x2dd0107e959a46.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-987d8b12\x2da57e\x2d4e72\x2da0cf\x2dd0107e959a46.mount has successfully entered the 'dead' state. Jan 23 17:22:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-987d8b12\x2da57e\x2d4e72\x2da0cf\x2dd0107e959a46.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-987d8b12\x2da57e\x2d4e72\x2da0cf\x2dd0107e959a46.mount has successfully entered the 'dead' state. 
Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.079310383Z" level=info msg="runSandbox: deleting pod ID 15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71 from idIndex" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.079335358Z" level=info msg="runSandbox: removing pod sandbox 15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.079348928Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.079359528Z" level=info msg="runSandbox: unmounting shmPath for sandbox 15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.095440285Z" level=info msg="runSandbox: removing pod sandbox from storage: 15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.098851331Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:26.098871034Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=60bc2db8-0f69-4535-9905-05bac33670ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:26.099058 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:22:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:26.099103 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:26.099128 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:26.099172 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(15c0d6aa22ce813157337bb577c656ce926240dc6c45a76a4cde99212eb0ed71): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:27.896587 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:27.896608 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:27.896613 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:27.896620 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:27.896626 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:27.896633 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:27.896638 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:22:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:27.996733 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:27.997093792Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:27.997143558Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:22:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:28.008795199Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/56a26613-e1e0-47b0-9b0a-62333fd4b25b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:22:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:28.008822742Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:22:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:28.142287208Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:22:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:30.996051 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:22:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:30.996557430Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:30.996611554Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.011541876Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/e494db9b-54cd-4df0-9b8f-54e817a702b1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.011567902Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.035263521Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed 
(delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.035302190Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-638fa7c9\x2dda54\x2d4282\x2db01a\x2d1d46da73affb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-638fa7c9\x2dda54\x2d4282\x2db01a\x2d1d46da73affb.mount has successfully entered the 'dead' state. Jan 23 17:22:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-638fa7c9\x2dda54\x2d4282\x2db01a\x2d1d46da73affb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-638fa7c9\x2dda54\x2d4282\x2db01a\x2d1d46da73affb.mount has successfully entered the 'dead' state. Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.074305130Z" level=info msg="runSandbox: deleting pod ID a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea from idIndex" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.074328888Z" level=info msg="runSandbox: removing pod sandbox a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.074342226Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.074353467Z" level=info msg="runSandbox: unmounting shmPath for sandbox a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.094426434Z" level=info msg="runSandbox: removing pod sandbox from storage: a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.097260722Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:31.097279317Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" 
id=a61a6e6b-f5f9-4d1b-acda-c766e51fddc9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:31.097541 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:22:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:31.097584 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:22:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:31.097608 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:22:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:31.097655 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:22:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-638fa7c9\x2dda54\x2d4282\x2db01a\x2d1d46da73affb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-638fa7c9\x2dda54\x2d4282\x2db01a\x2d1d46da73affb.mount has successfully entered the 'dead' state. Jan 23 17:22:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a441e927c780ea55814a677d847001f6d1f59b8442e3653aae5c74712d7b3cea-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:22:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:32.995765 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:22:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:32.996123723Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:32.996162039Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:22:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:33.008187045Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/1cd86a88-ef83-4995-8ba9-bff4701ddd66 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:22:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:33.008410214Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:22:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:34.995432 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:22:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:34.995692378Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:34.995737521Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:22:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:35.006786476Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/6a8e9ca2-d347-40c8-9960-b1d4000decec Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:22:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:35.006814076Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:22:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:35.996455 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:22:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:35.996765193Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:35.996803988Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:22:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:35.996778 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:22:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:35.997129759Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:22:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:35.997174790Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:36.011156896Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/74b951e9-36cc-4a9c-999a-4826e6108cf9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:36.011178469Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:36.012748297Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/d0c655c3-2dfa-457c-a22a-463a72bf248a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:36.012770763Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:22:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:36.995437 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:36.995684207Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:36.995729563Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:22:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:37.009049480Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/f9bf019c-4ca3-49ed-a0a9-429d7c6fe1e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:37.009075073Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:37.996550 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:22:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:37.996647 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:22:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:37.996917352Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:37.996953028Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:22:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:37.997099848Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:37.997126427Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:22:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:38.012144417Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/6a2db5f5-e990-4eeb-9859-fe3e246c1a98 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:38.012165085Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:38.012712677Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/6127373a-6b1f-4a49-9f39-20074e1f1f74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:38.012731890Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:38.996837 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29"
Jan 23 17:22:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:38.997342 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790511365Z" level=info msg="NetworkStart: stopping network for sandbox b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790534674Z" level=info msg="NetworkStart: stopping network for sandbox db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790669208Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/d621459e-430a-46c4-a0db-ce1993733ec6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790692336Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790699710Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790706623Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790754594Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/7f49f096-04ac-4953-8ef9-ce5426003596 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790779424Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790786101Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.790792992Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.793154214Z" level=info msg="NetworkStart: stopping network for sandbox c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.793296142Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/3d59f8f3-f396-4137-a673-d735bc073bc0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.793319279Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.793326938Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.793333242Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.794797908Z" level=info msg="NetworkStart: stopping network for sandbox e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.794897992Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/1beffe1b-c682-4c73-a740-1b6cb877c6fb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.794919183Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.794925699Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.794931157Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.796277906Z" level=info msg="NetworkStart: stopping network for sandbox a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.796389324Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a16de83f-4b7a-48ec-92cd-c1f522051c8d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.796408256Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.796414821Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:22:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:43.796420792Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:44.995614 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:22:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:44.996113182Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:22:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:44.996152431Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:22:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:45.006801757Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/8d5d79d1-dc98-4978-90ca-d5f95b40e659 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:22:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:45.006822036Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:22:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:22:53.996956 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29"
Jan 23 17:22:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:22:53.997479 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:22:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:22:58.142884461Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:23:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:02.022253968Z" level=info msg="NetworkStart: stopping network for sandbox 28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:02.022407925Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/02c38a48-a80f-4d18-b857-79b5361ccebd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:02.022430994Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:02.022438395Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:02.022445121Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:05.997130 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29"
Jan 23 17:23:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:05.997929403Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=81d2d4b0-6795-4772-b25f-bbc091cc78f0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:23:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:05.998068992Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=81d2d4b0-6795-4772-b25f-bbc091cc78f0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:23:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:05.998585826Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=d0992524-bbff-42b1-9801-9292007d3e7e name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:23:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:05.998689242Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d0992524-bbff-42b1-9801-9292007d3e7e name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:23:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:05.999490814Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=450be5bc-06ac-4894-88aa-da642e067771 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:23:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:05.999564549Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope.
-- Subject: Unit crio-conmon-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:23:06 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.
-- Subject: Unit crio-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.116094694Z" level=info msg="Created container f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=450be5bc-06ac-4894-88aa-da642e067771 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.116491249Z" level=info msg="Starting container: f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" id=6fb28156-5574-43b6-ad4e-65be429ea40b name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.122992644Z" level=info msg="Started container" PID=136271 containerID=f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=6fb28156-5574-43b6-ad4e-65be429ea40b name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.127985084Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.138683705Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.138704212Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.138717562Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.148726079Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.148746544Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.148761809Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.157749612Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.157764392Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.157772759Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.166019146Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.166040537Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab conmon[136250]: conmon f2257fc741579468bf0f : container 136271 exited with status 1
Jan 23 17:23:06 hub-master-0.workload.bos2.lab systemd[1]: crio-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope has successfully entered the 'dead' state.
Jan 23 17:23:06 hub-master-0.workload.bos2.lab systemd[1]: crio-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope: Consumed 557ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope completed and consumed the indicated resources.
Jan 23 17:23:06 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope has successfully entered the 'dead' state.
Jan 23 17:23:06 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope: Consumed 50ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf.scope completed and consumed the indicated resources.
Jan 23 17:23:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:06.870294 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/192.log"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:06.870839 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/191.log"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:06.871940 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" exitCode=1
Jan 23 17:23:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:06.871962 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf}
Jan 23 17:23:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:06.871982 8631 scope.go:115] "RemoveContainer" containerID="89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.872725327Z" level=info msg="Removing container: 89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29" id=a8000243-f5ed-480d-bf21-cdb7ee528be9 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:23:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:06.872853 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:23:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:06.873426 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:23:06 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-d97e3223e2dff1953d8effde008ca8bd687b492e4c3526b8bcba78d17e721152-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-d97e3223e2dff1953d8effde008ca8bd687b492e4c3526b8bcba78d17e721152-merged.mount has successfully entered the 'dead' state.
Jan 23 17:23:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:06.901190077Z" level=info msg="Removed container 89160cc1619d68f21304da249b68965eda0bfa7716ae5f611796b6888e6f3d29: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=a8000243-f5ed-480d-bf21-cdb7ee528be9 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:07.026303121Z" level=info msg="NetworkStart: stopping network for sandbox 52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:07.026424389Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/5677485b-060c-4913-b60d-02244fd82c5a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:07.026445799Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:07.026452016Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:07.026458033Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:07.874929 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/192.log"
Jan 23 17:23:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:10.668081 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:23:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:10.668992 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:23:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:10.669511 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:23:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:13.022254007Z" level=info msg="NetworkStart: stopping network for sandbox 8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:13.022415070Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/56a26613-e1e0-47b0-9b0a-62333fd4b25b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:13.022439973Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:13.022447708Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:13.022456172Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:16.024414266Z" level=info msg="NetworkStart: stopping network for sandbox 488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:16.024560712Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/e494db9b-54cd-4df0-9b8f-54e817a702b1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:16.024585224Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:16.024592213Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:16.024599357Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:18.022336133Z" level=info msg="NetworkStart: stopping network for sandbox 2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:18.022468003Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/1cd86a88-ef83-4995-8ba9-bff4701ddd66 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:18.022489604Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:18.022496178Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:18.022503076Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:20.020488726Z" level=info msg="NetworkStart: stopping network for sandbox 67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:20.020650755Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/6a8e9ca2-d347-40c8-9960-b1d4000decec Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:20.020678677Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:20.020685824Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:20.020693449Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.024252053Z" level=info msg="NetworkStart: stopping network for sandbox 6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.024400734Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/74b951e9-36cc-4a9c-999a-4826e6108cf9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.024425795Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.024432344Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.024440196Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.027740230Z" level=info msg="NetworkStart: stopping network for sandbox 9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.027843273Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/d0c655c3-2dfa-457c-a22a-463a72bf248a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.027863422Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.027870359Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:21.027876055Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:22.022453713Z" level=info msg="NetworkStart: stopping network for sandbox b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:22.022609068Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/f9bf019c-4ca3-49ed-a0a9-429d7c6fe1e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:22.022635868Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:22.022643133Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:22.022650988Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:22.996285 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:23:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:22.996791 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026134065Z" level=info msg="NetworkStart: stopping network for sandbox 224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026135252Z" level=info msg="NetworkStart: stopping network for sandbox 845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026279070Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/6127373a-6b1f-4a49-9f39-20074e1f1f74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026302506Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026308826Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026315073Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026321613Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/6a2db5f5-e990-4eeb-9859-fe3e246c1a98 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026347347Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026354996Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:23:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:23.026361384Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:27.896952 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:27.896973 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:27.896986 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:27.896995 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:27.897001 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:27.897008 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:23:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:27.897014 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.143385198Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.801888052Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.801928221Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.801959436Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.802002803Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.804744460Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.804774424Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.805555980Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.805585812Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.806547044Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.806572247Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7f49f096\x2d04ac\x2d4953\x2d8ef9\x2dce5426003596.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7f49f096\x2d04ac\x2d4953\x2d8ef9\x2dce5426003596.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d621459e\x2d430a\x2d46c4\x2da0db\x2dce1993733ec6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d621459e\x2d430a\x2d46c4\x2da0db\x2dce1993733ec6.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a16de83f\x2d4b7a\x2d48ec\x2d92cd\x2dc1f522051c8d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a16de83f\x2d4b7a\x2d48ec\x2d92cd\x2dc1f522051c8d.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1beffe1b\x2dc682\x2d4c73\x2da740\x2d1b6cb877c6fb.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-1beffe1b\x2dc682\x2d4c73\x2da740\x2d1b6cb877c6fb.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3d59f8f3\x2df396\x2d4137\x2da673\x2dd735bc073bc0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3d59f8f3\x2df396\x2d4137\x2da673\x2dd735bc073bc0.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a16de83f\x2d4b7a\x2d48ec\x2d92cd\x2dc1f522051c8d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a16de83f\x2d4b7a\x2d48ec\x2d92cd\x2dc1f522051c8d.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d621459e\x2d430a\x2d46c4\x2da0db\x2dce1993733ec6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d621459e\x2d430a\x2d46c4\x2da0db\x2dce1993733ec6.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3d59f8f3\x2df396\x2d4137\x2da673\x2dd735bc073bc0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3d59f8f3\x2df396\x2d4137\x2da673\x2dd735bc073bc0.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7f49f096\x2d04ac\x2d4953\x2d8ef9\x2dce5426003596.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7f49f096\x2d04ac\x2d4953\x2d8ef9\x2dce5426003596.mount has successfully entered the 'dead' state.
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.848327342Z" level=info msg="runSandbox: deleting pod ID a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679 from idIndex" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.848367425Z" level=info msg="runSandbox: removing pod sandbox a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.848382511Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.848395829Z" level=info msg="runSandbox: unmounting shmPath for sandbox a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.848331607Z" level=info msg="runSandbox: deleting pod ID b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6 from idIndex" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.848608383Z" level=info msg="runSandbox: removing pod sandbox b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.848621614Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.848634174Z" level=info msg="runSandbox: unmounting shmPath for sandbox b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.857278436Z" level=info msg="runSandbox: deleting pod ID db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b from idIndex" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.857302011Z" level=info msg="runSandbox: removing pod sandbox db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.857313580Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.857323892Z" level=info msg="runSandbox: unmounting shmPath for sandbox db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.864293981Z" level=info msg="runSandbox: deleting pod ID e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18 from idIndex" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.864319848Z" level=info msg="runSandbox: removing pod sandbox e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.864331732Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.864352787Z" level=info msg="runSandbox: unmounting shmPath for sandbox e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.864298364Z" level=info msg="runSandbox: deleting pod ID c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c from idIndex" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.864416049Z" level=info msg="runSandbox: removing pod sandbox c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.864428458Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.864440869Z" level=info msg="runSandbox: unmounting shmPath for sandbox c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.868478718Z" level=info msg="runSandbox: removing pod sandbox from storage: b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.868483557Z" level=info msg="runSandbox: removing pod sandbox from storage: a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.872315617Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.872332906Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=7120e021-84c6-4374-9275-c4c1019a4d07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.872527 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.872579 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.872604 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.872665 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.873420434Z" level=info msg="runSandbox: removing pod sandbox from storage: db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.875437092Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.875460962Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=c0898b0d-cf2c-4b9e-9c9a-dfe52f5f1511 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.875689 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.875735 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.875770 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.875823 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.876558671Z" level=info msg="runSandbox: removing pod sandbox from storage: c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.876558797Z" level=info msg="runSandbox: removing pod sandbox from storage: e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.878430647Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.878449547Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=e60e9727-5034-4c8a-90b9-85bc7493af36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.878652 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.878687 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.878709 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.878759 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.881421383Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.881439672Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b2b78e49-57f6-4da9-ad0b-3c944e6cd046 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.881667 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.881710 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.881731 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.881770 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.884654377Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.884674764Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=1d4a3f28-19a9-4803-87bb-eedf3cfc3a2e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.884852 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.884883 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.884902 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:28.884936 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:28.918415 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:28.918608 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:28.918712 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.918759623Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.918790429Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:28.918793 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:23:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:28.918879 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.918871522Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.918899969Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.918899252Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.918975199Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.918995767Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.919019829Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.918945392Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.919077178Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.936556347Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/8f032973-9455-419a-a705-6cf4abd664df Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.936582524Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.936817072Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/e3d7cc74-baab-4402-9c58-0c61c48e101b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.936836010Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.952176642Z" 
level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/73382375-1fca-4c19-b657-208d49407fde Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.952202876Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.953957118Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/1aedbc5f-7c95-4e53-aeab-eb27e5e04ae6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.953977481Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.954878846Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/46089bb5-bc87-469e-a467-29c36899cde4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:23:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:28.954896294Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a16de83f\x2d4b7a\x2d48ec\x2d92cd\x2dc1f522051c8d.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1beffe1b\x2dc682\x2d4c73\x2da740\x2d1b6cb877c6fb.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1beffe1b\x2dc682\x2d4c73\x2da740\x2d1b6cb877c6fb.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3d59f8f3\x2df396\x2d4137\x2da673\x2dd735bc073bc0.mount: Succeeded.
Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7f49f096\x2d04ac\x2d4953\x2d8ef9\x2dce5426003596.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d621459e\x2d430a\x2d46c4\x2da0db\x2dce1993733ec6.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-db68fa3b00661e8c3ce834d0e3870bf76879e333ac39f7904b4c6f41ab8be87b-userdata-shm.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a4d85a09fa5c10d37c3684ba5386b6342c5b47162bff256da7054594e7e35679-userdata-shm.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e3d9e1835f0d8c75907cd96df58b66b2abf06f65755d6a03ef8ca8047a144b18-userdata-shm.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c6140382d5e2294053f26d659617e386b3502f2b4ca78a2f3db2a308406dde6c-userdata-shm.mount: Succeeded. Jan 23 17:23:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b1fbc4bb703eb24a4546f67c77e87b00126e957d55fc9cf8040bbc9e23ff53e6-userdata-shm.mount: Succeeded.
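
Every CNI ADD above fails the same way: Multus refuses to attach any pod until the default network publishes its readiness indicator file, and /var/run/multus/cni/net.d/10-ovn-kubernetes.conf never appears because ovnkube-node itself is crash-looping (see the CrashLoopBackOff records below). The "pollimmediate error: timed out waiting for the condition" text is the standard error from the Kubernetes wait helpers. A minimal Go sketch of this gating pattern, assuming k8s.io/apimachinery's wait package; the one-second interval and one-minute timeout are illustrative choices, not Multus's actual settings:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Path taken from the log records above.
        readinessFile := "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

        // Check immediately, then every second, giving up after a minute. On
        // deadline, wait.PollImmediate returns exactly the "timed out waiting
        // for the condition" error quoted throughout this journal.
        err := wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
            _, statErr := os.Stat(readinessFile)
            switch {
            case statErr == nil:
                return true, nil // indicator present: default network is ready
            case os.IsNotExist(statErr):
                return false, nil // not yet: keep polling
            default:
                return false, statErr // unexpected stat error aborts the wait
            }
        })
        if err != nil {
            fmt.Fprintf(os.Stderr, "default network not ready: %v\n", err)
            os.Exit(1)
        }
        fmt.Println("readiness indicator present; sandbox creation can proceed")
    }

The pods named in the errors (oauth, apiservers, controller-managers) are therefore symptoms; the file that never appears is the thing to chase.
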
Jan 23 17:23:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:30.020940836Z" level=info msg="NetworkStart: stopping network for sandbox 17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:30.021108502Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/8d5d79d1-dc98-4978-90ca-d5f95b40e659 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:23:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:30.021133130Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:23:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:30.021140337Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:23:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:30.021148522Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:23:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:36.996501 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" Jan 23 17:23:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:36.997162 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:23:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494618.1190] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:23:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494618.1196] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:23:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494618.1196] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:23:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494618.1198] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:23:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494618.1203] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:23:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494618.1207] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:23:40 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494620.1184] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.033271620Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod 
sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.033475154Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-02c38a48\x2da80f\x2d4d18\x2db857\x2d79b5361ccebd.mount: Succeeded. Jan 23 17:23:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-02c38a48\x2da80f\x2d4d18\x2db857\x2d79b5361ccebd.mount: Succeeded. Jan 23 17:23:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-02c38a48\x2da80f\x2d4d18\x2db857\x2d79b5361ccebd.mount: Succeeded.
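
The teardown path is stuck behind the same gate: CNI DEL also polls for the ReadinessIndicatorFile ("PollImmediate error waiting for ReadinessIndicatorFile (on del)"), so CRI-O falls back to cleaning the uts/ipc/net namespaces itself, which is what the systemd mount-unit messages record. To count how many distinct pods are blocked, here is a small hypothetical triage helper in Go that scans a saved journal excerpt on stdin; the pod="namespace/name" field format is as printed by kubenswrapper above, and the file/program names are made up for illustration:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        // kubenswrapper error records carry a pod="namespace/name" field.
        podField := regexp.MustCompile(`pod="([^"]+)"`)
        stuck := map[string]int{}

        scanner := bufio.NewScanner(os.Stdin)
        // Journal lines in this dump far exceed bufio's 64 KiB default token
        // size, so grow the buffer before scanning.
        scanner.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for scanner.Scan() {
            line := scanner.Text()
            if !strings.Contains(line, "still waiting for readinessindicatorfile") {
                continue
            }
            for _, m := range podField.FindAllStringSubmatch(line, -1) {
                stuck[m[1]]++
            }
        }
        for pod, n := range stuck {
            fmt.Printf("%-75s %d failures\n", pod, n)
        }
    }

Run as "go run triage.go < node.log"; on this excerpt it would list oauth-openshift, both apiservers, both controller-managers, the kube-scheduler guard, and dns-default.
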
Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.075313982Z" level=info msg="runSandbox: deleting pod ID 28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200 from idIndex" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.075342387Z" level=info msg="runSandbox: removing pod sandbox 28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.075358323Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.075374784Z" level=info msg="runSandbox: unmounting shmPath for sandbox 28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200-userdata-shm.mount: Succeeded. Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.095404699Z" level=info msg="runSandbox: removing pod sandbox from storage: 28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.098293813Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:47.098313600Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=44bbb517-51e0-411c-8725-ca819e2f901f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:47.098549 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:23:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:47.098606 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:23:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:47.098635 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:23:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:47.098699 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(28a639225a25a5a91497973fc47b2fb0d27b4862b17b46c5ced5976c72383200): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:23:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:50.996272 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" Jan 23 17:23:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:50.996913 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.037202544Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.037255854Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5677485b\x2d060c\x2d4913\x2db60d\x2d02244fd82c5a.mount: Succeeded. Jan 23 17:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5677485b\x2d060c\x2d4913\x2db60d\x2d02244fd82c5a.mount: Succeeded. Jan 23 17:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5677485b\x2d060c\x2d4913\x2db60d\x2d02244fd82c5a.mount: Succeeded.
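
The RemoveContainer/CrashLoopBackOff pair above is the root cause the sandbox errors keep pointing back to: ovnkube-node, the pod responsible for writing 10-ovn-kubernetes.conf, is in "back-off 5m0s restarting failed container". Per upstream Kubernetes documentation, the kubelet's crash-loop restart delay starts at 10s, doubles on each consecutive failure, and caps at 5m; a sketch of that schedule, where the 10s/5m constants are the upstream defaults rather than values read from this log:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the kubelet-style restart delay after n consecutive
    // container failures: 10s, doubled each time, capped at the 5m seen above.
    func crashLoopDelay(n int) time.Duration {
        const base, maxDelay = 10 * time.Second, 5 * time.Minute
        d := base
        for i := 0; i < n; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 0; n <= 6; n++ {
            fmt.Printf("failure %d -> wait %v\n", n, crashLoopDelay(n))
        }
    }

Once the back-off reaches its cap, the node sits for five minutes between restart attempts, which matches the long stretches of repeated sandbox failures in this journal.
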
Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.081352711Z" level=info msg="runSandbox: deleting pod ID 52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48 from idIndex" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.081376995Z" level=info msg="runSandbox: removing pod sandbox 52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.081390757Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.081409597Z" level=info msg="runSandbox: unmounting shmPath for sandbox 52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48-userdata-shm.mount: Succeeded. Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.097455224Z" level=info msg="runSandbox: removing pod sandbox from storage: 52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.101107988Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:52.101125777Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=297626f2-ad7d-4eb7-b796-0662171dc98e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:52.101476 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 17:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:52.101521 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:52.101543 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:23:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:52.101586 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(52d0b4b7b6333a6f7b239d3d93c689a1bf309cac68f8713ed555bdce16e68b48): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.033160425Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.033218776Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-56a26613\x2de1e0\x2d47b0\x2d9b0a\x2d62333fd4b25b.mount: Succeeded. Jan 23 17:23:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-56a26613\x2de1e0\x2d47b0\x2d9b0a\x2d62333fd4b25b.mount: Succeeded. Jan 23 17:23:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-56a26613\x2de1e0\x2d47b0\x2d9b0a\x2d62333fd4b25b.mount: Succeeded.
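
By 17:23:58 the same create/teardown cycle has hit every workload in this excerpt: oauth-openshift, both apiservers, controller-manager, route-controller-manager, the kube-scheduler guard, dns-default, and now the kube-controller-manager guard. The crio timestamps make the window easy to measure; a hedged sketch that parses two timestamps quoted above, where the layout string is an assumption matching crio's time="..." format as printed in this journal:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // crio prints time="2023-01-23 17:23:58.033160425Z"; this layout parses it.
        const layout = "2006-01-02 15:04:05.999999999Z07:00"

        first, err := time.Parse(layout, "2023-01-23 17:23:28.864293981Z") // first runSandbox cleanup above
        if err != nil {
            panic(err)
        }
        last, err := time.Parse(layout, "2023-01-23 17:23:58.033160425Z") // kube-controller-manager guard failure
        if err != nil {
            panic(err)
        }
        fmt.Printf("sandbox failures span at least %v in this excerpt\n", last.Sub(first))
    }
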
Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.070313463Z" level=info msg="runSandbox: deleting pod ID 8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2 from idIndex" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.070341273Z" level=info msg="runSandbox: removing pod sandbox 8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.070358101Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.070370952Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.094464626Z" level=info msg="runSandbox: removing pod sandbox from storage: 8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.097974917Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.097996233Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=da02b529-f463-4afb-b574-94efda80a9c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:58.098251 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:23:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:58.098301 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:23:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:58.098329 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:23:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:23:58.098390 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(8be425f4c33aecfb12670242051c9f2fbe2e443e1351ba9a0c43bd7930ba2dd2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:23:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:58.143786904Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:23:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:23:59.996295 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:23:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:59.996606612Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:23:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:23:59.996805709Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:24:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:00.008798718Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/06b288c7-7edc-4e39-8b99-3240579bf449 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:24:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:00.008826517Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.036096775Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.036130226Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e494db9b\x2d54cd\x2d4df0\x2d9b8f\x2d54e817a702b1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e494db9b\x2d54cd\x2d4df0\x2d9b8f\x2d54e817a702b1.mount has successfully entered the 'dead' state. Jan 23 17:24:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e494db9b\x2d54cd\x2d4df0\x2d9b8f\x2d54e817a702b1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e494db9b\x2d54cd\x2d4df0\x2d9b8f\x2d54e817a702b1.mount has successfully entered the 'dead' state. Jan 23 17:24:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e494db9b\x2d54cd\x2d4df0\x2d9b8f\x2d54e817a702b1.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e494db9b\x2d54cd\x2d4df0\x2d9b8f\x2d54e817a702b1.mount has successfully entered the 'dead' state. Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.075306099Z" level=info msg="runSandbox: deleting pod ID 488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c from idIndex" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.075332771Z" level=info msg="runSandbox: removing pod sandbox 488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.075346229Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.075357900Z" level=info msg="runSandbox: unmounting shmPath for sandbox 488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.087487315Z" level=info msg="runSandbox: removing pod sandbox from storage: 488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.090670593Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:01.090690402Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=518d9816-8a55-4d94-b904-0d83bc5277f4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:01.090897 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:24:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:01.090939 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:24:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:01.090962 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:24:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:01.091004 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(488abb698ab0881a2a2711603a14e459c627159bbda68ee48b9d585b6673ae3c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.033040438Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.033076114Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1cd86a88\x2def83\x2d4995\x2d8ba9\x2dbff4701ddd66.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1cd86a88\x2def83\x2d4995\x2d8ba9\x2dbff4701ddd66.mount has successfully entered the 'dead' state. Jan 23 17:24:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1cd86a88\x2def83\x2d4995\x2d8ba9\x2dbff4701ddd66.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1cd86a88\x2def83\x2d4995\x2d8ba9\x2dbff4701ddd66.mount has successfully entered the 'dead' state. Jan 23 17:24:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1cd86a88\x2def83\x2d4995\x2d8ba9\x2dbff4701ddd66.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1cd86a88\x2def83\x2d4995\x2d8ba9\x2dbff4701ddd66.mount has successfully entered the 'dead' state. Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.072312542Z" level=info msg="runSandbox: deleting pod ID 2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d from idIndex" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.072336143Z" level=info msg="runSandbox: removing pod sandbox 2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.072350736Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.072361891Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.088410590Z" level=info msg="runSandbox: removing pod sandbox from storage: 2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.091872887Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:03.091889643Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bd358b13-6e53-4b04-ba9e-95f7dd2eb62e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:03.092086 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:24:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:03.092136 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:24:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:03.092160 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:24:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:03.092220 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2473962f46d13a5addc6c18f8d7ff952b185e44742fdfc68473e0aed7e255a9d): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:24:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:04.996684 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" Jan 23 17:24:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:04.997198 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.032152363Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.032204214Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6a8e9ca2\x2dd347\x2d40c8\x2d9960\x2db1d4000decec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6a8e9ca2\x2dd347\x2d40c8\x2d9960\x2db1d4000decec.mount has successfully entered the 'dead' state. Jan 23 17:24:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6a8e9ca2\x2dd347\x2d40c8\x2d9960\x2db1d4000decec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6a8e9ca2\x2dd347\x2d40c8\x2d9960\x2db1d4000decec.mount has successfully entered the 'dead' state. Jan 23 17:24:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6a8e9ca2\x2dd347\x2d40c8\x2d9960\x2db1d4000decec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6a8e9ca2\x2dd347\x2d40c8\x2d9960\x2db1d4000decec.mount has successfully entered the 'dead' state. 
Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.072305269Z" level=info msg="runSandbox: deleting pod ID 67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10 from idIndex" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.072331715Z" level=info msg="runSandbox: removing pod sandbox 67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.072354146Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.072368597Z" level=info msg="runSandbox: unmounting shmPath for sandbox 67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.088440339Z" level=info msg="runSandbox: removing pod sandbox from storage: 67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.092137561Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:05.092155764Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=b6e89c33-1f01-485b-8e8c-fb7acb70eca9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:05.092387 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:24:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:05.092429 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:24:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:05.092463 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:24:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:05.092514 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(67762ac644b0c206480ee848323e22299b9ca1779db69a016f9670c0c28fdf10): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.035259607Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.035297694Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.037964034Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.037994444Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-74b951e9\x2d36cc\x2d4a9c\x2d999a\x2d4826e6108cf9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-74b951e9\x2d36cc\x2d4a9c\x2d999a\x2d4826e6108cf9.mount has successfully entered the 'dead' state. Jan 23 17:24:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d0c655c3\x2d2dfa\x2d457c\x2da22a\x2d463a72bf248a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d0c655c3\x2d2dfa\x2d457c\x2da22a\x2d463a72bf248a.mount has successfully entered the 'dead' state. Jan 23 17:24:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-74b951e9\x2d36cc\x2d4a9c\x2d999a\x2d4826e6108cf9.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-74b951e9\x2d36cc\x2d4a9c\x2d999a\x2d4826e6108cf9.mount has successfully entered the 'dead' state. Jan 23 17:24:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d0c655c3\x2d2dfa\x2d457c\x2da22a\x2d463a72bf248a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d0c655c3\x2d2dfa\x2d457c\x2da22a\x2d463a72bf248a.mount has successfully entered the 'dead' state. Jan 23 17:24:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d0c655c3\x2d2dfa\x2d457c\x2da22a\x2d463a72bf248a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d0c655c3\x2d2dfa\x2d457c\x2da22a\x2d463a72bf248a.mount has successfully entered the 'dead' state. Jan 23 17:24:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-74b951e9\x2d36cc\x2d4a9c\x2d999a\x2d4826e6108cf9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-74b951e9\x2d36cc\x2d4a9c\x2d999a\x2d4826e6108cf9.mount has successfully entered the 'dead' state. Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.086328424Z" level=info msg="runSandbox: deleting pod ID 6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b from idIndex" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.086356200Z" level=info msg="runSandbox: removing pod sandbox 6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.086369108Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.086380909Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.086331257Z" level=info msg="runSandbox: deleting pod ID 9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38 from idIndex" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.086442102Z" level=info msg="runSandbox: removing pod sandbox 9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.086454333Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.086467939Z" 
level=info msg="runSandbox: unmounting shmPath for sandbox 9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.102444975Z" level=info msg="runSandbox: removing pod sandbox from storage: 6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.102454789Z" level=info msg="runSandbox: removing pod sandbox from storage: 9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.105847366Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.105864633Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=26db0fb2-28ab-41e2-b6b9-a36843a92476 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:06.106092 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:06.106252 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:06.106275 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:06.106320 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.108806929Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.108824659Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=6f20fd07-3168-4e45-984f-082a894263b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:06.109028 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:06.109062 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:06.109082 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:06.109120 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:24:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:06.995499 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.995835640Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:06.995874749Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.006517807Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/d394bc7f-0db9-4ee0-b6f9-d5cb749c1168 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.006538595Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.035194829Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.035240684Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f9bf019c\x2d4ca3\x2d49ed\x2da0a9\x2d429d7c6fe1e8.mount: Succeeded.
Jan 23 17:24:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9bf88530bc7924f73f9ebbb222415bda4165623bd963145fb5cbd7476dc6ab38-userdata-shm.mount: Succeeded.
Jan 23 17:24:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6fdf669d499b8b469ac2f5126527f7ec92e7d3df11cf8535a5e9a8630ee7490b-userdata-shm.mount: Succeeded.
Jan 23 17:24:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f9bf019c\x2d4ca3\x2d49ed\x2da0a9\x2d429d7c6fe1e8.mount: Succeeded.
Jan 23 17:24:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f9bf019c\x2d4ca3\x2d49ed\x2da0a9\x2d429d7c6fe1e8.mount: Succeeded.
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.075290421Z" level=info msg="runSandbox: deleting pod ID b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422 from idIndex" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.075318997Z" level=info msg="runSandbox: removing pod sandbox b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.075336330Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.075352456Z" level=info msg="runSandbox: unmounting shmPath for sandbox b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422-userdata-shm.mount: Succeeded.
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.095462732Z" level=info msg="runSandbox: removing pod sandbox from storage: b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.101826023Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:07.101848262Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=ee78fae3-20c5-4827-9d1c-b581ceff47c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:07.102037 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:07.102079 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:24:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:07.102101 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:24:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:07.102148 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(b29d7f094410bf665921b3583c407d80e6485f998539752c6e0da7463d54d422): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.036731617Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.036763818Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.037241481Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.037272300Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6127373a\x2d6b1f\x2d4a49\x2d9f39\x2d20074e1f1f74.mount: Succeeded.
Jan 23 17:24:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6a2db5f5\x2de990\x2d4eeb\x2d9859\x2dfe3e246c1a98.mount: Succeeded.
Jan 23 17:24:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6127373a\x2d6b1f\x2d4a49\x2d9f39\x2d20074e1f1f74.mount: Succeeded.
Jan 23 17:24:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6a2db5f5\x2de990\x2d4eeb\x2d9859\x2dfe3e246c1a98.mount: Succeeded.
Jan 23 17:24:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6127373a\x2d6b1f\x2d4a49\x2d9f39\x2d20074e1f1f74.mount: Succeeded.
Jan 23 17:24:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6a2db5f5\x2de990\x2d4eeb\x2d9859\x2dfe3e246c1a98.mount: Succeeded.
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.072299329Z" level=info msg="runSandbox: deleting pod ID 845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e from idIndex" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.072327954Z" level=info msg="runSandbox: removing pod sandbox 845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.072341864Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.072354474Z" level=info msg="runSandbox: unmounting shmPath for sandbox 845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.072301485Z" level=info msg="runSandbox: deleting pod ID 224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9 from idIndex" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.072411350Z" level=info msg="runSandbox: removing pod sandbox 224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.072423484Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.072435714Z" level=info msg="runSandbox: unmounting shmPath for sandbox 224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9-userdata-shm.mount: Succeeded.
Jan 23 17:24:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e-userdata-shm.mount: Succeeded.
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.084450060Z" level=info msg="runSandbox: removing pod sandbox from storage: 224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.084450091Z" level=info msg="runSandbox: removing pod sandbox from storage: 845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.087824181Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.087841354Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=92ecdb9f-916f-4915-8e23-55a7ab0f4527 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:08.088064 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:08.088107 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:08.088131 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:08.088177 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(224cb453c13a2738201a1fb0479d16566561adfdcfc776ba959b55b6baa9f3b9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.090747817Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.090764690Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=7e59f1c9-f370-4ee3-910d-7ddf7817d779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:08.090949 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:08.090982 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:08.091002 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:08.091038 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(845bb468dd5ca8c50c3cd1d67a62d2799460ab225f69f0c357830db9a0de668e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:24:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:08.995573 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.995913721Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:08.995951775Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:09.006305037Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/4d6dde43-cebc-42b9-97e9-d1c2387e4021 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:09.006325849Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952198949Z" level=info msg="NetworkStart: stopping network for sandbox 91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952372214Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/8f032973-9455-419a-a705-6cf4abd664df Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952395341Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952401924Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952408788Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952506380Z" level=info msg="NetworkStart: stopping network for sandbox e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952621729Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/e3d7cc74-baab-4402-9c58-0c61c48e101b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952643140Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952649123Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.952656647Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.965614606Z" level=info msg="NetworkStart: stopping network for sandbox 9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.965739838Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/73382375-1fca-4c19-b657-208d49407fde Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.965763290Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.965771070Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.965778488Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.966558721Z" level=info msg="NetworkStart: stopping network for sandbox a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.966686982Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/1aedbc5f-7c95-4e53-aeab-eb27e5e04ae6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.966711196Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.966721697Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.966731633Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.967691441Z" level=info msg="NetworkStart: stopping network for sandbox d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.967794074Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/46089bb5-bc87-469e-a467-29c36899cde4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.967814376Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.967820134Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.967825906Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:13.995966 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.996303752Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:13.996338269Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:14.007030911Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/64dd10cc-f4ca-4da6-93a4-2f0454556e5e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:14.007048967Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:14.996437 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:24:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:14.996811214Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:14.997056068Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.007919211Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/4046cbf5-c229-4cab-8097-9d0396d91f79 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.007938132Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.031868727Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.031910757Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8d5d79d1\x2ddc98\x2d4978\x2d90ca\x2dd5f95b40e659.mount: Succeeded.
Jan 23 17:24:15 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8d5d79d1\x2ddc98\x2d4978\x2d90ca\x2dd5f95b40e659.mount: Succeeded.
Jan 23 17:24:15 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8d5d79d1\x2ddc98\x2d4978\x2d90ca\x2dd5f95b40e659.mount: Succeeded.
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.066284087Z" level=info msg="runSandbox: deleting pod ID 17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751 from idIndex" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.066311678Z" level=info msg="runSandbox: removing pod sandbox 17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.066328992Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.066343827Z" level=info msg="runSandbox: unmounting shmPath for sandbox 17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.079459541Z" level=info msg="runSandbox: removing pod sandbox from storage: 17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.082397157Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:15.082416137Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=9859d08b-f3c0-4aa0-ba9a-0ac0caf6e441 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:15.082565 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:15.082606 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:24:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:15.082629 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:24:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:15.082679 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:24:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-17ccd1339eca5f57d0a3e35ecc9aa5a13bb177c7767841d61f87ea570e6b3751-userdata-shm.mount: Succeeded.
Jan 23 17:24:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:17.996850 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:24:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:17.997028 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:24:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:17.997203939Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:17.997249875Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:17.997391924Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:17.997438674Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:18.011309327Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/e8c7d1b8-77b3-4918-866a-0bc435834215 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:18.011330887Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:18.013422488Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ee450aea-919c-421e-8694-1d84c9770ee0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:18.013444282Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:18.996605 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:24:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:18.997110 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:24:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:19.996653 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:24:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:19.996819 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:24:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:19.996982209Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:19.997019763Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:19.997154772Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:19.997200767Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:20.017525281Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/8de80b55-39ac-4b33-93ce-621b9523affd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:20.017550844Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:20.018431362Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/9d117682-ec7c-4871-99b7-5f4f0f537927 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:20.018453315Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:20.995915 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:24:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:20.996072 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab"
Jan 23 17:24:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:20.996276510Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:20.996319849Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:20.996365257Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:20.996396686Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:21.009659463Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/4aacca0c-b2c4-4e5b-91fe-40f8c88b17c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:21.009680161Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:21.010901678Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/d99f854f-5ccc-45e8-8f0b-1c43b919be24 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:21.010922025Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:25.996634 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:24:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:25.996967093Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:25.997007585Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:26.008784350Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/91ef61fc-73d2-47ad-8fed-f675416d9419 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:26.008811510Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:27.897537 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:27.897739 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:27.897745 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:27.897752 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:27.897758 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:27.897766 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:24:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:27.897772 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:24:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:28.143066746Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:24:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:31.996409 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:24:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:31.996923 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:24:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:44.001103 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:24:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:44.001649 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:45.022033460Z" level=info msg="NetworkStart: stopping network for sandbox ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:45.022357468Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/06b288c7-7edc-4e39-8b99-3240579bf449 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:45.022386686Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:45.022394354Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:45.022402475Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:52.019457514Z" level=info msg="NetworkStart: stopping network for sandbox 40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:52.019642559Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/d394bc7f-0db9-4ee0-b6f9-d5cb749c1168 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:52.019666724Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:52.019675850Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:52.019683468Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:54.019978298Z" level=info msg="NetworkStart: stopping network for sandbox b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:54.020125700Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/4d6dde43-cebc-42b9-97e9-d1c2387e4021 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:54.020146699Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:54.020154222Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:54.020161278Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:57.996842 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:24:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:57.997490 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.141941255Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.963552137Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.963590202Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox
e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.963554284Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.963661693Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e3d7cc74\x2dbaab\x2d4402\x2d9c58\x2d0c61c48e101b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e3d7cc74\x2dbaab\x2d4402\x2d9c58\x2d0c61c48e101b.mount has successfully entered the 'dead' state. Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8f032973\x2d9455\x2d419a\x2da705\x2d6cf4abd664df.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8f032973\x2d9455\x2d419a\x2da705\x2d6cf4abd664df.mount has successfully entered the 'dead' state. Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8f032973\x2d9455\x2d419a\x2da705\x2d6cf4abd664df.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8f032973\x2d9455\x2d419a\x2da705\x2d6cf4abd664df.mount has successfully entered the 'dead' state. 
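The repeated "back-off 5m0s restarting failed container=ovnkube-node" entries above are kubelet's restart back-off for a crash-looping container. A minimal sketch of that policy in Python, assuming kubelet's usual parameters (an initial delay of roughly 10s that doubles per crash, capped at the 5m0s quoted in the log; the function name and base value are illustrative, not kubelet source):

    BASE_SECONDS = 10    # assumed initial delay
    CAP_SECONDS = 300    # the "5m0s" cap quoted in the log

    def restart_delay(crash_count: int) -> int:
        """Seconds to wait before the next restart attempt (capped doubling)."""
        return min(BASE_SECONDS * 2 ** max(crash_count - 1, 0), CAP_SECONDS)

    # 1 crash -> 10s, 2 -> 20s, 3 -> 40s, ..., 6 or more -> 300s (5m0s).

Once the cap is reached, every pod sync that falls inside the open back-off window is rejected, which is why the same "Error syncing pod, skipping" message recurs at 17:24:31, 17:24:44 and 17:24:57 above while the pod stays in CrashLoopBackOff.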
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.976462187Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.976489261Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.977495738Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.977529210Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.977759613Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:58.977794457Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e3d7cc74\x2dbaab\x2d4402\x2d9c58\x2d0c61c48e101b.mount: Succeeded.
Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-46089bb5\x2dbc87\x2d469e\x2da467\x2d29c36899cde4.mount: Succeeded.
Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1aedbc5f\x2d7c95\x2d4e53\x2daeab\x2deb27e5e04ae6.mount: Succeeded.
Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-73382375\x2d1fca\x2d4c19\x2db657\x2d208d49407fde.mount: Succeeded.
Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-46089bb5\x2dbc87\x2d469e\x2da467\x2d29c36899cde4.mount: Succeeded.
Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1aedbc5f\x2d7c95\x2d4e53\x2daeab\x2deb27e5e04ae6.mount: Succeeded.
Jan 23 17:24:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-73382375\x2d1fca\x2d4c19\x2db657\x2d208d49407fde.mount: Succeeded.
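The mount-unit names in the systemd entries above (run-utsns-46089bb5\x2dbc87...mount and friends) use systemd's unit-name escaping: "/" in a mount path becomes "-", and a literal "-" becomes "\x2d". A small hypothetical helper (not part of systemd; roughly what "systemd-escape --unescape --path" does) to map a unit name back to the namespace bind mount it represents:

    import re

    def unit_to_path(unit: str) -> str:
        """Map a systemd mount unit name back to its mount point (hypothetical helper)."""
        body = unit.removesuffix(".mount")          # Python 3.9+
        path = body.replace("-", "/")               # "-" separates path components
        path = re.sub(r"\\x([0-9a-fA-F]{2})",       # "\x2d" encodes a literal "-"
                      lambda m: chr(int(m.group(1), 16)), path)
        return "/" + path

    print(unit_to_path(r"run-utsns-46089bb5\x2dbc87\x2d469e\x2da467\x2d29c36899cde4.mount"))
    # -> /run/utsns/46089bb5-bc87-469e-a467-29c36899cde4

These are the per-sandbox UTS/IPC/network-namespace bind mounts CRI-O set up; the "Succeeded." entries record their teardown as the failed sandboxes are cleaned up.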
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.011309576Z" level=info msg="runSandbox: deleting pod ID 91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599 from idIndex" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.011338548Z" level=info msg="runSandbox: removing pod sandbox 91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.011354167Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.011366959Z" level=info msg="runSandbox: unmounting shmPath for sandbox 91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.011313746Z" level=info msg="runSandbox: deleting pod ID e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507 from idIndex" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.011437741Z" level=info msg="runSandbox: removing pod sandbox e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.011466731Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.011486297Z" level=info msg="runSandbox: unmounting shmPath for sandbox e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.019484823Z" level=info msg="runSandbox: removing pod sandbox from storage: e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.020035299Z" level=info msg="NetworkStart: stopping network for sandbox 4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.020166980Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/64dd10cc-f4ca-4da6-93a4-2f0454556e5e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.020191270Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.020198387Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.020212528Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.022719773Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.022740438Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=f26c85bd-bb65-483a-9e72-f7c307e15389 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.022959 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.023008 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.023030 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.023077 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023325825Z" level=info msg="runSandbox: deleting pod ID a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22 from idIndex" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023357016Z" level=info msg="runSandbox: removing pod sandbox a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023371450Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023383576Z" level=info msg="runSandbox: unmounting shmPath for sandbox a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023326636Z" level=info msg="runSandbox: deleting pod ID d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7 from idIndex" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023448773Z" level=info msg="runSandbox: removing pod sandbox d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023465133Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023477008Z" level=info msg="runSandbox: unmounting shmPath for sandbox d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.023494377Z" level=info msg="runSandbox: removing pod sandbox from storage: 91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.027067062Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.027084182Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=a36f223b-3b45-43bd-bf6d-5d26794e58a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.027304 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.027336 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.027356 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.027392 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.031319844Z" level=info msg="runSandbox: deleting pod ID 9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585 from idIndex" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.031344520Z" level=info msg="runSandbox: removing pod sandbox 9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.031356630Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.031368131Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.043473746Z" level=info msg="runSandbox: removing pod sandbox from storage: d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.043474742Z" level=info msg="runSandbox: removing pod sandbox from storage: a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.046678203Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.046695539Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=aec07542-23a4-47b7-9072-f143ab84833d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.046872 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.046905 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.046926 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.046963 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.047440658Z" level=info msg="runSandbox: removing pod sandbox from storage: 9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.050112819Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.050132352Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=ed7bf53e-386c-47b1-8161-7e70c7159919 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.050340 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.050374 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.050394 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.050433 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.053157997Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.053175792Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=e5ddd97f-d31d-4a84-89d8-68bb1478422f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.053361 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.053392 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.053411 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:24:59.053448 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:59.086701 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:59.086880 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.086950796Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.086978300Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:59.087069 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:59.087118 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:24:59.087166 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.087388983Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.087426000Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.087468222Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.087501585Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.087524359Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.087552977Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.087610177Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.087636339Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.116992336Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/fb420add-624d-4b33-8476-3c8858bbef74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.117021029Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.118198330Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/2c5e46aa-b0dc-401a-a335-963d34b0bc61 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.118225691Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.119604922Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/ec16caf7-8a61-47c6-ac17-4cef3a5e8c00 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.119624942Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.119916932Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5b2f6b42-b826-4ffc-b871-6caa1cacd92b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.119941272Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.121200663Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/13e9454d-2135-4d47-808c-587a551ce612 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:24:59.121233653Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-46089bb5\x2dbc87\x2d469e\x2da467\x2d29c36899cde4.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1aedbc5f\x2d7c95\x2d4e53\x2daeab\x2deb27e5e04ae6.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-73382375\x2d1fca\x2d4c19\x2db657\x2d208d49407fde.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a5a1df887dad57478b4052c03792522157a6d5078f41b6560deac2a251d5ae22-userdata-shm.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d60667e9dd57f8eacbb56f9356fdb2834c5ac7f2bf059b42a7bf356ced413cc7-userdata-shm.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9fb7d076183ecc427639e98dbac1e7370275b80362f519bd3af13961f89c0585-userdata-shm.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e3d7cc74\x2dbaab\x2d4402\x2d9c58\x2d0c61c48e101b.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8f032973\x2d9455\x2d419a\x2da705\x2d6cf4abd664df.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e30a0fdfa07d64c7ee0a20012c021d634f7e1b4ab07157ec8b387ec8ee8ff507-userdata-shm.mount: Succeeded.
Jan 23 17:24:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-91f7226fb4fbb0535d816f8ef4f2d71b06c52b13a1418084eac08721c963d599-userdata-shm.mount: Succeeded.
Jan 23 17:25:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:00.021639323Z" level=info msg="NetworkStart: stopping network for sandbox e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:00.022011160Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/4046cbf5-c229-4cab-8097-9d0396d91f79 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:00.022033367Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:00.022039938Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:00.022045743Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.025587928Z" level=info msg="NetworkStart: stopping network for sandbox 9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.025734655Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/e8c7d1b8-77b3-4918-866a-0bc435834215 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.025755519Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.025767104Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.025773133Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.026219093Z" level=info msg="NetworkStart: stopping network for sandbox 917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.026366464Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ee450aea-919c-421e-8694-1d84c9770ee0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.026394851Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.026402658Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:03.026411930Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.030492924Z" level=info msg="NetworkStart: stopping network for sandbox 393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.030635163Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/8de80b55-39ac-4b33-93ce-621b9523affd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.030658868Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.030666969Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.030672813Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.031103751Z" level=info msg="NetworkStart: stopping network for sandbox 9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.031269358Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/9d117682-ec7c-4871-99b7-5f4f0f537927 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.031294805Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.031303164Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:05.031309971Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.023584958Z" level=info msg="NetworkStart: stopping network for sandbox 6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.023719126Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/d99f854f-5ccc-45e8-8f0b-1c43b919be24 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.023741613Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.023748639Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.023755962Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.024679169Z" level=info msg="NetworkStart: stopping network for sandbox 671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.024779054Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/4aacca0c-b2c4-4e5b-91fe-40f8c88b17c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.024798283Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.024804934Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:06.024811465Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1280] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1284] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1285] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1492] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1494] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1505] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1508] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1508] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1510] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1513] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 17:25:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494708.1517] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 17:25:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494709.5344] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 17:25:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:11.022893368Z" level=info msg="NetworkStart: stopping network for sandbox 497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:11.023087210Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/91ef61fc-73d2-47ad-8fed-f675416d9419 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:11.023120242Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:11.023129423Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:11.023136045Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:12.997146 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:25:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:12.997840 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:27.898257 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:27.898279 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:27.898286 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:27.898292 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:27.898298 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:27.898303 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:27.898311 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:25:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:27.901672251Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=5c03285c-ad0a-4322-b0af-84283d419fcb name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:25:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:27.901969694Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5c03285c-ad0a-4322-b0af-84283d419fcb name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:27.997157 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:25:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:27.997659 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:25:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:28.143041853Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.032570111Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.032616595Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-06b288c7\x2d7edc\x2d4e39\x2d8b99\x2d3240579bf449.mount: Succeeded.
Jan 23 17:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-06b288c7\x2d7edc\x2d4e39\x2d8b99\x2d3240579bf449.mount: Succeeded.
Jan 23 17:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-06b288c7\x2d7edc\x2d4e39\x2d8b99\x2d3240579bf449.mount: Succeeded.
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.088344871Z" level=info msg="runSandbox: deleting pod ID ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8 from idIndex" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.088383712Z" level=info msg="runSandbox: removing pod sandbox ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.088400438Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.088415371Z" level=info msg="runSandbox: unmounting shmPath for sandbox ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8-userdata-shm.mount: Succeeded.
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.100478674Z" level=info msg="runSandbox: removing pod sandbox from storage: ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.103765343Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:30.103785877Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=58a468c7-0453-433e-a555-6e6213e88d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:30.104046 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:30.104093 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:30.104117 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:25:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:30.104165 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ff8431335b8bb80653bb271a3bb2de91f03fc4f85f81e6cdd7b9ccda74dc0bb8): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.031106170Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.031138379Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d394bc7f\x2d0db9\x2d4ee0\x2db6f9\x2dd5cb749c1168.mount: Succeeded.
Jan 23 17:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d394bc7f\x2d0db9\x2d4ee0\x2db6f9\x2dd5cb749c1168.mount: Succeeded.
Jan 23 17:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d394bc7f\x2d0db9\x2d4ee0\x2db6f9\x2dd5cb749c1168.mount: Succeeded.
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.073282366Z" level=info msg="runSandbox: deleting pod ID 40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196 from idIndex" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.073306411Z" level=info msg="runSandbox: removing pod sandbox 40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.073320634Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.073331999Z" level=info msg="runSandbox: unmounting shmPath for sandbox 40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196-userdata-shm.mount: Succeeded.
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.086456689Z" level=info msg="runSandbox: removing pod sandbox from storage: 40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.092366779Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:37.092389133Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=910180ac-5f28-430c-8010-230de41d6a00 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:37.092634 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:25:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:37.092840 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:25:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:37.092863 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:25:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:37.092911 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(40bf7af2119df518f334205b235804824294ac54262669f5a59352a9f2274196): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.031224238Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.031258631Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4d6dde43\x2dcebc\x2d42b9\x2d97e9\x2dd1c2387e4021.mount: Succeeded.
Jan 23 17:25:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4d6dde43\x2dcebc\x2d42b9\x2d97e9\x2dd1c2387e4021.mount: Succeeded.
Jan 23 17:25:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4d6dde43\x2dcebc\x2d42b9\x2d97e9\x2dd1c2387e4021.mount: Succeeded.
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.074308150Z" level=info msg="runSandbox: deleting pod ID b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac from idIndex" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.074331044Z" level=info msg="runSandbox: removing pod sandbox b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.074344613Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.074355722Z" level=info msg="runSandbox: unmounting shmPath for sandbox b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac-userdata-shm.mount: Succeeded.
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.090451662Z" level=info msg="runSandbox: removing pod sandbox from storage: b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.093943800Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:39.093962064Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=58f24c94-5296-441e-8e64-612c7c7b2cd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:39.094163 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:25:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:39.094215 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:25:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:39.094241 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:25:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:39.094291 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b5aeb15d28f10d73c99f13b7a5fd17bcbc33eb7f286b6dc15f1e70d6b702dbac): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:25:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:42.996588 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:25:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:42.997079 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:25:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:43.995887 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:25:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:43.996216259Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:43.996260334Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.008811039Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/2b29c448-1478-4519-8cf4-bb06bd080a00 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.008832838Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.031529168Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.031560745Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-64dd10cc\x2df4ca\x2d4da6\x2d93a4\x2d2f0454556e5e.mount: Succeeded.
Jan 23 17:25:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-64dd10cc\x2df4ca\x2d4da6\x2d93a4\x2d2f0454556e5e.mount: Succeeded.
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.070303484Z" level=info msg="runSandbox: deleting pod ID 4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7 from idIndex" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.070326562Z" level=info msg="runSandbox: removing pod sandbox 4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.070338556Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.070350013Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.082453442Z" level=info msg="runSandbox: removing pod sandbox from storage: 4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.085279128Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.085297468Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=d7465bba-9dbd-4863-9ee2-4bf92ffa71de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:44.085498 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:25:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:44.085537 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:25:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:44.085558 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:25:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:44.085602 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.130996252Z" level=info msg="NetworkStart: stopping network for sandbox c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131116089Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/fb420add-624d-4b33-8476-3c8858bbef74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131138604Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131147981Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131154747Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131588261Z" level=info msg="NetworkStart: stopping network for sandbox cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131688323Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/2c5e46aa-b0dc-401a-a335-963d34b0bc61 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131707396Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131714593Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.131719992Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132528596Z" level=info msg="NetworkStart: stopping network for sandbox 20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132540607Z" level=info msg="NetworkStart: stopping network for sandbox 8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132645733Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5b2f6b42-b826-4ffc-b871-6caa1cacd92b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132667508Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132671777Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/ec16caf7-8a61-47c6-ac17-4cef3a5e8c00 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132673957Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132703387Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132697475Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132787074Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.132795021Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.135861813Z" level=info msg="NetworkStart: stopping network for sandbox ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.136000188Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/13e9454d-2135-4d47-808c-587a551ce612 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.136023625Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.136031611Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:44.136038746Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:25:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-64dd10cc\x2df4ca\x2d4da6\x2d93a4\x2d2f0454556e5e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-64dd10cc\x2df4ca\x2d4da6\x2d93a4\x2d2f0454556e5e.mount has successfully entered the 'dead' state.
Jan 23 17:25:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-4b44ae772c3f6f6f6d045373273a8e514db8e1dc8d9eb951625d10fa2e5f44d7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.033488710Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.033523573Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4046cbf5\x2dc229\x2d4cab\x2d8097\x2d9d0396d91f79.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4046cbf5\x2dc229\x2d4cab\x2d8097\x2d9d0396d91f79.mount has successfully entered the 'dead' state.
Jan 23 17:25:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4046cbf5\x2dc229\x2d4cab\x2d8097\x2d9d0396d91f79.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4046cbf5\x2dc229\x2d4cab\x2d8097\x2d9d0396d91f79.mount has successfully entered the 'dead' state.
Jan 23 17:25:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4046cbf5\x2dc229\x2d4cab\x2d8097\x2d9d0396d91f79.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4046cbf5\x2dc229\x2d4cab\x2d8097\x2d9d0396d91f79.mount has successfully entered the 'dead' state.
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.074307319Z" level=info msg="runSandbox: deleting pod ID e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5 from idIndex" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.074332169Z" level=info msg="runSandbox: removing pod sandbox e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.074344701Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.074356489Z" level=info msg="runSandbox: unmounting shmPath for sandbox e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.085487498Z" level=info msg="runSandbox: removing pod sandbox from storage: e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.088886402Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:45.088903749Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=2b942c67-79d2-4f0e-89e4-240618d5875e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:45.089127 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 17:25:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:45.089180 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:25:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:45.089211 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:25:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:45.089261 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e90f9446ed3055560cbcfdbab7e5766ae6d12223e24f26d80902c970e8324bb5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.037631399Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.037883733Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.037810690Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.037989463Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ee450aea\x2d919c\x2d421e\x2d8694\x2d1d84c9770ee0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ee450aea\x2d919c\x2d421e\x2d8694\x2d1d84c9770ee0.mount has successfully entered the 'dead' state.
Jan 23 17:25:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e8c7d1b8\x2d77b3\x2d4918\x2d866a\x2d0bc435834215.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e8c7d1b8\x2d77b3\x2d4918\x2d866a\x2d0bc435834215.mount has successfully entered the 'dead' state.
Jan 23 17:25:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ee450aea\x2d919c\x2d421e\x2d8694\x2d1d84c9770ee0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ee450aea\x2d919c\x2d421e\x2d8694\x2d1d84c9770ee0.mount has successfully entered the 'dead' state.
Jan 23 17:25:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e8c7d1b8\x2d77b3\x2d4918\x2d866a\x2d0bc435834215.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e8c7d1b8\x2d77b3\x2d4918\x2d866a\x2d0bc435834215.mount has successfully entered the 'dead' state.
Jan 23 17:25:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ee450aea\x2d919c\x2d421e\x2d8694\x2d1d84c9770ee0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ee450aea\x2d919c\x2d421e\x2d8694\x2d1d84c9770ee0.mount has successfully entered the 'dead' state.
Jan 23 17:25:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e8c7d1b8\x2d77b3\x2d4918\x2d866a\x2d0bc435834215.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-e8c7d1b8\x2d77b3\x2d4918\x2d866a\x2d0bc435834215.mount has successfully entered the 'dead' state.
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.081292097Z" level=info msg="runSandbox: deleting pod ID 917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287 from idIndex" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.081319997Z" level=info msg="runSandbox: removing pod sandbox 917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.081333821Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.081346178Z" level=info msg="runSandbox: unmounting shmPath for sandbox 917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.081297572Z" level=info msg="runSandbox: deleting pod ID 9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74 from idIndex" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.081407074Z" level=info msg="runSandbox: removing pod sandbox 9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.081419985Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.081431400Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:25:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.093457854Z" level=info msg="runSandbox: removing pod sandbox from storage: 917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.093485477Z" level=info msg="runSandbox: removing pod sandbox from storage: 9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.096924613Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.096942221Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=a0e2232c-7354-45be-98dd-e266069bc20d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:48.097176 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 17:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:48.097223 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:48.097248 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:48.097293 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(917fd9175db3a9362058dc2573b19cdaa066c0700e80fff3e59a5723b0eec287): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.099842633Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:48.099860833Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=6ad72722-88af-4be3-8f73-b7760556d4d4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:48.100027 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 17:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:48.100064 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:48.100089 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:25:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:48.100138 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9501691982dc08f0257a0304df8bd99885ce905535fd1f67ece2481d35f34a74): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:25:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:49.995732 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:49.996109684Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:49.996160541Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.008289998Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/2df0337f-88a8-487d-9e7d-df816d64ea46 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.008314094Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.041013824Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.041050652Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.043260315Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:25:50.043296387Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8de80b55\x2d39ac\x2d4b33\x2d93ce\x2d621b9523affd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8de80b55\x2d39ac\x2d4b33\x2d93ce\x2d621b9523affd.mount has successfully entered the 'dead' state. Jan 23 17:25:50 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9d117682\x2dec7c\x2d4871\x2d99b7\x2d5f4f0f537927.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9d117682\x2dec7c\x2d4871\x2d99b7\x2d5f4f0f537927.mount has successfully entered the 'dead' state. Jan 23 17:25:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8de80b55\x2d39ac\x2d4b33\x2d93ce\x2d621b9523affd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8de80b55\x2d39ac\x2d4b33\x2d93ce\x2d621b9523affd.mount has successfully entered the 'dead' state. Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.077307952Z" level=info msg="runSandbox: deleting pod ID 393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98 from idIndex" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.077332819Z" level=info msg="runSandbox: removing pod sandbox 393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.077346628Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.077358654Z" level=info msg="runSandbox: unmounting shmPath for sandbox 393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.081336786Z" level=info msg="runSandbox: deleting pod ID 9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b from idIndex" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.081363451Z" level=info msg="runSandbox: removing pod sandbox 9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.081378721Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:25:50.081393341Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.093442937Z" level=info msg="runSandbox: removing pod sandbox from storage: 393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.096745600Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.096764871Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=32ead53e-4a01-45e1-a8a7-761e54eb80a9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:50.096973 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:25:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:50.097015 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:25:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:50.097037 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:25:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:50.097085 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.097436268Z" level=info msg="runSandbox: removing pod sandbox from storage: 9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.100738319Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:50.100757863Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=6c615fdf-739e-4e9d-b056-9a6c39569b33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:50.100934 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 17:25:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:50.100965 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:25:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:50.100988 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:25:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:50.101027 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9d117682\x2dec7c\x2d4871\x2d99b7\x2d5f4f0f537927.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-9d117682\x2dec7c\x2d4871\x2d99b7\x2d5f4f0f537927.mount has successfully entered the 'dead' state.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9d117682\x2dec7c\x2d4871\x2d99b7\x2d5f4f0f537927.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9d117682\x2dec7c\x2d4871\x2d99b7\x2d5f4f0f537927.mount has successfully entered the 'dead' state.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8de80b55\x2d39ac\x2d4b33\x2d93ce\x2d621b9523affd.mount: Succeeded.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9de5950932009bb3d445158260d9761bf301c8b09351e46cffda61843372b72b-userdata-shm.mount: Succeeded.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-393c63a20536b461679914b4739766fda339ead4db0f279015b3de6bab4ecf98-userdata-shm.mount: Succeeded.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.034602231Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.034632351Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.034690151Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.034727313Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d99f854f\x2d5ccc\x2d45e8\x2d8f0b\x2d1c43b919be24.mount: Succeeded.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4aacca0c\x2db2c4\x2d4e5b\x2d91fe\x2d40f8c88b17c9.mount: Succeeded.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d99f854f\x2d5ccc\x2d45e8\x2d8f0b\x2d1c43b919be24.mount: Succeeded.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4aacca0c\x2db2c4\x2d4e5b\x2d91fe\x2d40f8c88b17c9.mount: Succeeded.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d99f854f\x2d5ccc\x2d45e8\x2d8f0b\x2d1c43b919be24.mount: Succeeded.
Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4aacca0c\x2db2c4\x2d4e5b\x2d91fe\x2d40f8c88b17c9.mount: Succeeded.
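The recurring failure in this stretch of the log is Multus timing out while polling for its readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which only appears once the default OVN-Kubernetes network is up (and ovnkube-node is crash-looping below). A minimal sketch of that wait loop, assuming only the path and timeout semantics quoted in the log; this is a reading aid, not the Multus source, which implements the same idea in Go via wait.PollImmediate:

import os
import time

# Path taken verbatim from the log messages above.
READINESS_FILE = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

def wait_for_readiness(path: str = READINESS_FILE,
                       interval: float = 1.0,
                       timeout: float = 600.0) -> bool:
    """Poll until `path` exists; returning False corresponds to the log's
    "timed out waiting for the condition"."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False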
Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.079296742Z" level=info msg="runSandbox: deleting pod ID 671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010 from idIndex" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.079323162Z" level=info msg="runSandbox: removing pod sandbox 671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.079337733Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.079349414Z" level=info msg="runSandbox: unmounting shmPath for sandbox 671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.079297875Z" level=info msg="runSandbox: deleting pod ID 6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb from idIndex" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.079407454Z" level=info msg="runSandbox: removing pod sandbox 6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.079421179Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.079433512Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:25:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.095449746Z" level=info msg="runSandbox: removing pod sandbox from storage: 671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.095450683Z" level=info msg="runSandbox: removing pod sandbox from storage: 6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.103100319Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.103130666Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=72a315be-397b-4943-904e-00ccd04b81bb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:51.103340 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:51.103388 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:51.103411 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:51.103463 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(671316fdd87500f975ba8ff63bc64521417adf2e66bb61f6c7fea8c3e19b9010): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.106649788Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.106669942Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=e5827c89-ffad-4b37-87c1-dfc5e086f042 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:51.106850 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:51.106892 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:51.106917 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:51.106965 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(6bab74e65f136dcb93e61d50d728d446512a88a4ab458e35228c98e1a38bddbb): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:25:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:51.996335 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.996783936Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:51.996837736Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:25:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:52.010587020Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/0abc087c-6e70-4277-887f-7775be1bb33b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:25:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:52.010608756Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:25:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:55.995766 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:25:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:55.996134069Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:55.996177357Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.006761343Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/7e834945-5ce7-4398-ba32-910e3df62814 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.006780648Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.034456126Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.034493075Z" 
level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-91ef61fc\x2d73d2\x2d47ad\x2d8fed\x2df675416d9419.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-91ef61fc\x2d73d2\x2d47ad\x2d8fed\x2df675416d9419.mount has successfully entered the 'dead' state. Jan 23 17:25:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-91ef61fc\x2d73d2\x2d47ad\x2d8fed\x2df675416d9419.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-91ef61fc\x2d73d2\x2d47ad\x2d8fed\x2df675416d9419.mount has successfully entered the 'dead' state. Jan 23 17:25:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-91ef61fc\x2d73d2\x2d47ad\x2d8fed\x2df675416d9419.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-91ef61fc\x2d73d2\x2d47ad\x2d8fed\x2df675416d9419.mount has successfully entered the 'dead' state. Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.076315471Z" level=info msg="runSandbox: deleting pod ID 497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a from idIndex" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.076344022Z" level=info msg="runSandbox: removing pod sandbox 497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.076359096Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.076375889Z" level=info msg="runSandbox: unmounting shmPath for sandbox 497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.093466030Z" level=info msg="runSandbox: removing pod sandbox from storage: 497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.096275622Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:56.096294733Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=0bbd79db-f2af-4c1d-9b0b-d56cd7b2079d 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:56.096487 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:25:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:56.096527 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:25:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:56.096549 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:25:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:56.096596 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:25:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:56.996406 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" Jan 23 17:25:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:25:56.996948 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:25:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-497f7ad9338340e4e5ef59d50ddbd9d10c1937d2542d92f2d431b92e24662c0a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:25:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:58.143141333Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:25:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:58.995794 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:25:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:58.996110831Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:58.996144490Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:25:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:59.007700795Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/6f33c2ea-2c1c-44bc-8de2-35c2b0e7a748 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:25:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:59.007719973Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:25:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:25:59.995895 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:25:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:59.996236080Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:25:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:25:59.996285198Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:00.008035760Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/a812d015-a17a-44a1-8f04-218c57237585 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:00.008065670Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:01.996299 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:26:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:01.996754626Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:01.997002982Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:02.010848985Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/66cb08dc-f8ea-4e5b-aed2-58ef4e657fcf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:02.010875007Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:02.996359 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:26:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:02.996602 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:26:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:02.996767406Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:02.996812277Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:02.997092579Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:02.997123068Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:03.013228197Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/63167fb5-3bf8-4e9d-a694-291b7ce920ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:03.013250608Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 
17:26:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:03.013893355Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/66917607-bf54-45c9-8bdf-0b4a5ccaef3a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:03.013914825Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:04.996293 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:26:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:04.996450 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:26:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:04.996628159Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:04.996666746Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:04.996703559Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/POD" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:04.996740810Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:05.011120948Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/b11b0f09-a264-427a-ae78-25b7bb3b9441 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:05.011140561Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:05.012704035Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/c97892ba-30de-4d2a-a5ce-66441db14c54 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:05.012730408Z" level=info msg="Adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network 
\"multus-cni-network\" (type=multus)" Jan 23 17:26:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:08.996311 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:26:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:08.996663778Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:08.996700873Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:08.997137 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" Jan 23 17:26:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:08.997636 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:26:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:09.008555005Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/c92ba7b2-df46-45ac-88d3-e8af597ddabf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:09.008577445Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:12 hub-master-0.workload.bos2.lab conmon[123835]: conmon 628d5fe7ddffe06b8c97 : container 123846 exited with status 1 Jan 23 17:26:12 hub-master-0.workload.bos2.lab systemd[1]: crio-628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.scope has successfully entered the 'dead' state. Jan 23 17:26:12 hub-master-0.workload.bos2.lab systemd[1]: crio-628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.scope: Consumed 3.721s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.scope completed and consumed the indicated resources. Jan 23 17:26:12 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.scope has successfully entered the 'dead' state. 
Jan 23 17:26:12 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464.scope: Consumed 51ms CPU time
Jan 23 17:26:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:13.216487 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464" exitCode=1
Jan 23 17:26:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:13.216530 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464}
Jan 23 17:26:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:13.216625 8631 scope.go:115] "RemoveContainer" containerID="4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314"
Jan 23 17:26:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:13.216849 8631 scope.go:115] "RemoveContainer" containerID="628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464"
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.217299472Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=1deca41b-d02c-4955-ba08-b6c2972387ce name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.217446191Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1deca41b-d02c-4955-ba08-b6c2972387ce name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.217662906Z" level=info msg="Removing container: 4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314" id=2da19cc3-cf8a-4d44-ba30-83a0ac9cea72 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.217850768Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=cc68d7b7-695d-4eef-86af-0a9d89031898 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.217964853Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=cc68d7b7-695d-4eef-86af-0a9d89031898 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.218588093Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=002c138f-f3ed-4b13-914d-acf509d27209 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.218656790Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:26:13 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-f621a72e00ffa83ce9d076baf39c0497c13e89b6a90a1eb657fc477456930d27-merged.mount: Succeeded.
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.259438861Z" level=info msg="Removed container 4cd7e96020d7236a9b698025462b3a5bc31964f08d1894bc3c10094dc4937314: openshift-multus/multus-cdt6c/kube-multus" id=2da19cc3-cf8a-4d44-ba30-83a0ac9cea72 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:26:13 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope.
Jan 23 17:26:13 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.
Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.376045458Z" level=info msg="Created container 3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4: openshift-multus/multus-cdt6c/kube-multus" id=002c138f-f3ed-4b13-914d-acf509d27209 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.376484659Z" level=info msg="Starting container: 3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4" id=971ec15b-889f-40d7-8b93-1aa6515160ec name=/runtime.v1.RuntimeService/StartContainer Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.395016944Z" level=info msg="Started container" PID=141930 containerID=3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4 description=openshift-multus/multus-cdt6c/kube-multus id=971ec15b-889f-40d7-8b93-1aa6515160ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8 Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.399519616Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_90c21fcf-e1ae-4b8a-81da-39f69ed76f0e\"" Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.409793208Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.409810960Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.421803884Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\"" Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.431613806Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.431630729Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:26:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:13.431642207Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_90c21fcf-e1ae-4b8a-81da-39f69ed76f0e\"" Jan 23 17:26:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:14.219716 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4} Jan 23 17:26:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:19.996598 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" Jan 23 17:26:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:19.997079 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:27.898559 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:27.898695 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:27.898702 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:27.898708 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:27.898714 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:27.898720 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:26:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:27.898728 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:26:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:28.143186856Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.022550168Z" level=info msg="NetworkStart: stopping network for sandbox 903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.022696163Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/2b29c448-1478-4519-8cf4-bb06bd080a00 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.022719192Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.022726082Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.022732857Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.142545014Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network 
\"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.142578621Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.142888816Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.142921071Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.142938198Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.142971375Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.143857300Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed 
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.143887918Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.147347550Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.147384011Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5b2f6b42\x2db826\x2d4ffc\x2db871\x2d6caa1cacd92b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5b2f6b42\x2db826\x2d4ffc\x2db871\x2d6caa1cacd92b.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ec16caf7\x2d8a61\x2d47c6\x2dac17\x2d4cef3a5e8c00.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ec16caf7\x2d8a61\x2d47c6\x2dac17\x2d4cef3a5e8c00.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2c5e46aa\x2db0dc\x2d401a\x2da335\x2d963d34b0bc61.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2c5e46aa\x2db0dc\x2d401a\x2da335\x2d963d34b0bc61.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fb420add\x2d624d\x2d4b33\x2d8476\x2d3c8858bbef74.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-fb420add\x2d624d\x2d4b33\x2d8476\x2d3c8858bbef74.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-13e9454d\x2d2135\x2d4d47\x2d808c\x2d587a551ce612.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-13e9454d\x2d2135\x2d4d47\x2d808c\x2d587a551ce612.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-13e9454d\x2d2135\x2d4d47\x2d808c\x2d587a551ce612.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-13e9454d\x2d2135\x2d4d47\x2d808c\x2d587a551ce612.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ec16caf7\x2d8a61\x2d47c6\x2dac17\x2d4cef3a5e8c00.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ec16caf7\x2d8a61\x2d47c6\x2dac17\x2d4cef3a5e8c00.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2c5e46aa\x2db0dc\x2d401a\x2da335\x2d963d34b0bc61.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2c5e46aa\x2db0dc\x2d401a\x2da335\x2d963d34b0bc61.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fb420add\x2d624d\x2d4b33\x2d8476\x2d3c8858bbef74.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-fb420add\x2d624d\x2d4b33\x2d8476\x2d3c8858bbef74.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5b2f6b42\x2db826\x2d4ffc\x2db871\x2d6caa1cacd92b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5b2f6b42\x2db826\x2d4ffc\x2db871\x2d6caa1cacd92b.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-13e9454d\x2d2135\x2d4d47\x2d808c\x2d587a551ce612.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-13e9454d\x2d2135\x2d4d47\x2d808c\x2d587a551ce612.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5b2f6b42\x2db826\x2d4ffc\x2db871\x2d6caa1cacd92b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5b2f6b42\x2db826\x2d4ffc\x2db871\x2d6caa1cacd92b.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ec16caf7\x2d8a61\x2d47c6\x2dac17\x2d4cef3a5e8c00.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ec16caf7\x2d8a61\x2d47c6\x2dac17\x2d4cef3a5e8c00.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2c5e46aa\x2db0dc\x2d401a\x2da335\x2d963d34b0bc61.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2c5e46aa\x2db0dc\x2d401a\x2da335\x2d963d34b0bc61.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fb420add\x2d624d\x2d4b33\x2d8476\x2d3c8858bbef74.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-fb420add\x2d624d\x2d4b33\x2d8476\x2d3c8858bbef74.mount has successfully entered the 'dead' state.
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196412858Z" level=info msg="runSandbox: deleting pod ID c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8 from idIndex" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196442770Z" level=info msg="runSandbox: removing pod sandbox c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196457713Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196471311Z" level=info msg="runSandbox: unmounting shmPath for sandbox c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196412871Z" level=info msg="runSandbox: deleting pod ID 8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e from idIndex" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196538375Z" level=info msg="runSandbox: removing pod sandbox 8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196560886Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196576046Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196415678Z" level=info msg="runSandbox: deleting pod ID cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831 from idIndex" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196645338Z" level=info msg="runSandbox: removing pod sandbox cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196659011Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196835320Z" level=info msg="runSandbox: unmounting shmPath for sandbox cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196417356Z" level=info msg="runSandbox: deleting pod ID 20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e from idIndex" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196922244Z" level=info msg="runSandbox: removing pod sandbox 20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196938284Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196951524Z" level=info msg="runSandbox: unmounting shmPath for sandbox 20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196422713Z" level=info msg="runSandbox: deleting pod ID ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72 from idIndex" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.196992594Z" level=info msg="runSandbox: removing pod sandbox ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.197006203Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.197018826Z" level=info msg="runSandbox: unmounting shmPath for sandbox ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.208471650Z" level=info msg="runSandbox: removing pod sandbox from storage: 8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.212035872Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.212055100Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=3a1c7f6a-8d83-4aba-b389-1132a7ef057a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.212337 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.212389 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.212412 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.212462 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.212432137Z" level=info msg="runSandbox: removing pod sandbox from storage: ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.212507953Z" level=info msg="runSandbox: removing pod sandbox from storage: c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.212538922Z" level=info msg="runSandbox: removing pod sandbox from storage: cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.213420438Z" level=info msg="runSandbox: removing pod sandbox from storage: 20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.216022200Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.216042706Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=daa05585-6556-4417-b957-209c37533ba8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.216248 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.216279 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.216301 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.216339 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.219009768Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.219027295Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=3b1bf59a-06ad-435e-b1a6-17bde5310ab2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.219256 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.219288 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.219309 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.219342 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.225498677Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.225523150Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=31d560f6-6cdd-4065-90d1-a080399cffe6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.225692 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.225722 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.225744 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.225780 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.228641074Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.228663499Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=c32c0064-dea0-4177-b4c5-6aaf633d6985 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.228822 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.228855 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.228876 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:29.228913 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:29.244725 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:29.244949 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:29.244998 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.244998426Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245026663Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:29.245041 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:26:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:29.245272 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245382543Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245411396Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245487548Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245513628Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245601793Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245616705Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245668036Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.245692596Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.271704888Z" level=info msg="Got pod network 
&{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/86852d82-588e-4bbb-9041-e2b7f1b5fa4a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.271726200Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.272324580Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/2783fedd-e313-4dea-bd3c-ac321c6594e3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.272344578Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.273199562Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/563e7a43-d752-48cf-9031-5782ea05fe7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.273225902Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.274692792Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3c219f2c-6bc0-4d4a-ae13-4ddbfbf4f358 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.274715201Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.275870715Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/584e2b94-0bf0-4d2f-837c-22188dee6f3d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:29.275890823Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:30 hub-master-0.workload.bos2.lab 
systemd[1]: run-containers-storage-overlay\x2dcontainers-cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-cb86df7d87d47d2225ad3036afda623f77c87a485e1b7dd4447260888f68b831-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:26:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ca9f1d419660e33946ccf43edec94cb871c3d23529b782657979413971238c72-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:26:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-20af37aaef884d9cc3bdae2eac57652c098e5bb5bfd3c3ab4943403b1f99ab8e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:26:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8fd4494392558a3a1c0f69e3d8aeb2d1187ef2f1d4ea52c0b0a7561a9b1e405e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:26:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c630d55504e9a4be96b6826f4b6245b762c3adf572288d275b90abac66e24ef8-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:26:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:31.997320 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:26:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:31.997857 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:35.021451062Z" level=info msg="NetworkStart: stopping network for sandbox c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:35.021646308Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/2df0337f-88a8-487d-9e7d-df816d64ea46 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:35.021674645Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:35.021681366Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:26:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:35.021688062Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:37.023249058Z" level=info msg="NetworkStart: stopping network for sandbox a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:37.023409202Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/0abc087c-6e70-4277-887f-7775be1bb33b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:37.023435712Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:37.023444942Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:26:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:37.023452402Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494798.1179] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 17:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494798.1184] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 17:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494798.1185] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 17:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494798.1492] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 17:26:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674494798.1494] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 17:26:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:41.019815662Z" level=info msg="NetworkStart: stopping network for sandbox a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:41.019980346Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/7e834945-5ce7-4398-ba32-910e3df62814 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:26:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:41.020005750Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:26:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:41.020012896Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:26:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:41.020019834Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:26:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:44.021036749Z" level=info msg="NetworkStart: stopping network for sandbox d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:44.021181235Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/6f33c2ea-2c1c-44bc-8de2-35c2b0e7a748 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:26:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:44.021219644Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:26:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:44.021227526Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:26:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:44.021237802Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:26:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:45.022518061Z" level=info msg="NetworkStart: stopping network for sandbox dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:45.022664601Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/a812d015-a17a-44a1-8f04-218c57237585 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:26:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:45.022686929Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:26:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:45.022694554Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:26:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:45.022701251Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:26:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:46.996701 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:26:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:46.997333 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:26:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:47.025729742Z" level=info msg="NetworkStart: stopping network for sandbox 4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:47.025904436Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/66cb08dc-f8ea-4e5b-aed2-58ef4e657fcf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:26:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:47.025929068Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:26:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:47.025936503Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:26:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:47.025942812Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.026410995Z" level=info msg="NetworkStart: stopping network for sandbox 53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.026550023Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/66917607-bf54-45c9-8bdf-0b4a5ccaef3a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.026573472Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.026580553Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.026586811Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.026898863Z" level=info msg="NetworkStart: stopping network for sandbox 9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.027018246Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/63167fb5-3bf8-4e9d-a694-291b7ce920ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.027040562Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.027047470Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:26:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:48.027053675Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.023893026Z" level=info msg="NetworkStart: stopping network for sandbox 4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.024038058Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/b11b0f09-a264-427a-ae78-25b7bb3b9441
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.024060892Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.024067230Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.024073858Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.026108811Z" level=info msg="NetworkStart: stopping network for sandbox 36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.026253518Z" level=info msg="Got pod network &{Name:installer-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6 UID:b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 NetNS:/var/run/netns/c97892ba-30de-4d2a-a5ce-66441db14c54 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.026277315Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.026285679Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:26:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:50.026292481Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:54.026066315Z" level=info msg="NetworkStart: stopping network for sandbox a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:26:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:54.026442823Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/c92ba7b2-df46-45ac-88d3-e8af597ddabf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:26:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:54.026466992Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:26:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:54.026473496Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:26:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:54.026479777Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab 
from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:26:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:26:58.144003711Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:26:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:26:59.996876 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" Jan 23 17:26:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:26:59.997395 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:27:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:12.996541 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" Jan 23 17:27:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:12.997183 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.035487673Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.035523603Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:14 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2b29c448\x2d1478\x2d4519\x2d8cf4\x2dbb06bd080a00.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2b29c448\x2d1478\x2d4519\x2d8cf4\x2dbb06bd080a00.mount has successfully entered the 'dead' state. Jan 23 17:27:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2b29c448\x2d1478\x2d4519\x2d8cf4\x2dbb06bd080a00.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2b29c448\x2d1478\x2d4519\x2d8cf4\x2dbb06bd080a00.mount has successfully entered the 'dead' state. 
Jan 23 17:27:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2b29c448\x2d1478\x2d4519\x2d8cf4\x2dbb06bd080a00.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2b29c448\x2d1478\x2d4519\x2d8cf4\x2dbb06bd080a00.mount has successfully entered the 'dead' state.
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.086357002Z" level=info msg="runSandbox: deleting pod ID 903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be from idIndex" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.086389847Z" level=info msg="runSandbox: removing pod sandbox 903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.086404322Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.086416166Z" level=info msg="runSandbox: unmounting shmPath for sandbox 903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.106446919Z" level=info msg="runSandbox: removing pod sandbox from storage: 903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.109497209Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.109516014Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=db92a9f0-62a2-4472-a9f9-a290d71632cb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:14.109876 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:14.109924 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:14.109960 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:14.110012 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(903bffa1fba44df2db110a161a4e70db8b78ba85d06d0370fe1ee8f7d38bd5be): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.284741162Z" level=info msg="NetworkStart: stopping network for sandbox 94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.284865431Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/86852d82-588e-4bbb-9041-e2b7f1b5fa4a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.284888079Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.284895228Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.284901718Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.285143532Z" level=info msg="NetworkStart: stopping network for sandbox 5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.285280509Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/2783fedd-e313-4dea-bd3c-ac321c6594e3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.285305934Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.285312823Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.285319023Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.286743961Z" level=info msg="NetworkStart: stopping network for sandbox 5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.286858997Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3c219f2c-6bc0-4d4a-ae13-4ddbfbf4f358 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.286879971Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.286886701Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.286893247Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.287190658Z" level=info msg="NetworkStart: stopping network for sandbox 0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.287367636Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/563e7a43-d752-48cf-9031-5782ea05fe7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.287402424Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.287415118Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.287425637Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.288669650Z" level=info msg="NetworkStart: stopping network for sandbox 5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.288787852Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/584e2b94-0bf0-4d2f-837c-22188dee6f3d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.288810818Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.288817724Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:27:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:14.288823261Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.031511297Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.031552141Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2df0337f\x2d88a8\x2d487d\x2d9e7d\x2ddf816d64ea46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2df0337f\x2d88a8\x2d487d\x2d9e7d\x2ddf816d64ea46.mount has successfully entered the 'dead' state.
Jan 23 17:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2df0337f\x2d88a8\x2d487d\x2d9e7d\x2ddf816d64ea46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2df0337f\x2d88a8\x2d487d\x2d9e7d\x2ddf816d64ea46.mount has successfully entered the 'dead' state.
Jan 23 17:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2df0337f\x2d88a8\x2d487d\x2d9e7d\x2ddf816d64ea46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2df0337f\x2d88a8\x2d487d\x2d9e7d\x2ddf816d64ea46.mount has successfully entered the 'dead' state.
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.073311878Z" level=info msg="runSandbox: deleting pod ID c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250 from idIndex" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.073336144Z" level=info msg="runSandbox: removing pod sandbox c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.073351821Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.073364300Z" level=info msg="runSandbox: unmounting shmPath for sandbox c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.093446565Z" level=info msg="runSandbox: removing pod sandbox from storage: c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.097090095Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:20.097109341Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=d4db35cd-99b6-47aa-9458-8ccb50f3c65c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:20.097320 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:20.097368 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:20.097392 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:20.097441 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(c73cc549f74ec41fc78ff4cf37d863ac2ef3598090d702611ed02d4ea307f250): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.034100501Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.034136991Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0abc087c\x2d6e70\x2d4277\x2d887f\x2d7775be1bb33b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-0abc087c\x2d6e70\x2d4277\x2d887f\x2d7775be1bb33b.mount has successfully entered the 'dead' state.
Jan 23 17:27:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0abc087c\x2d6e70\x2d4277\x2d887f\x2d7775be1bb33b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-0abc087c\x2d6e70\x2d4277\x2d887f\x2d7775be1bb33b.mount has successfully entered the 'dead' state.
Jan 23 17:27:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0abc087c\x2d6e70\x2d4277\x2d887f\x2d7775be1bb33b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-0abc087c\x2d6e70\x2d4277\x2d887f\x2d7775be1bb33b.mount has successfully entered the 'dead' state.
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.075388916Z" level=info msg="runSandbox: deleting pod ID a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81 from idIndex" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.075417418Z" level=info msg="runSandbox: removing pod sandbox a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.075431325Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.075448769Z" level=info msg="runSandbox: unmounting shmPath for sandbox a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.087435105Z" level=info msg="runSandbox: removing pod sandbox from storage: a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.090923937Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:22.090941534Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=5cfcb5c0-969f-493f-8201-4a254372b0bf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:22.091163 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:22.091217 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:27:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:22.091244 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:27:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:22.091296 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(a157d886f03461665b8b4236b15e538b078809c484b7d004d0b358df52caff81): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.032045200Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.032242937Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7e834945\x2d5ce7\x2d4398\x2dba32\x2d910e3df62814.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7e834945\x2d5ce7\x2d4398\x2dba32\x2d910e3df62814.mount has successfully entered the 'dead' state.
Jan 23 17:27:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7e834945\x2d5ce7\x2d4398\x2dba32\x2d910e3df62814.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7e834945\x2d5ce7\x2d4398\x2dba32\x2d910e3df62814.mount has successfully entered the 'dead' state.
Jan 23 17:27:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7e834945\x2d5ce7\x2d4398\x2dba32\x2d910e3df62814.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7e834945\x2d5ce7\x2d4398\x2dba32\x2d910e3df62814.mount has successfully entered the 'dead' state.
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.067407640Z" level=info msg="runSandbox: deleting pod ID a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235 from idIndex" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.067440080Z" level=info msg="runSandbox: removing pod sandbox a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.067456705Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.067471862Z" level=info msg="runSandbox: unmounting shmPath for sandbox a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.075490762Z" level=info msg="runSandbox: removing pod sandbox from storage: a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.078919080Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:26.078937924Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=bf4fa27f-53ad-44a2-b84e-69641c263afa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:26.079159 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:26.079214 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:27:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:26.079241 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:27:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:26.079285 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a9c244781e9bf69fb573ea350f20e3dca8b8fcdff6126612ead0a966c6f75235): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:27.899750 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:27.899769 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:27.899775 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:27.899781 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:27.899787 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:27.899793 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:27.899799 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:27.997712 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:27:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:27.998251 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:28.142313099Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:27:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:28.995671 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:28.995929628Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:28.995987950Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.008495014Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a2c9dc54-93a2-4e32-87cf-9a709e042aa7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.008518690Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.032018457Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.032053699Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6f33c2ea\x2d2c1c\x2d44bc\x2d8de2\x2d35c2b0e7a748.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-6f33c2ea\x2d2c1c\x2d44bc\x2d8de2\x2d35c2b0e7a748.mount has successfully entered the 'dead' state.
Jan 23 17:27:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6f33c2ea\x2d2c1c\x2d44bc\x2d8de2\x2d35c2b0e7a748.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-6f33c2ea\x2d2c1c\x2d44bc\x2d8de2\x2d35c2b0e7a748.mount has successfully entered the 'dead' state.
Jan 23 17:27:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6f33c2ea\x2d2c1c\x2d44bc\x2d8de2\x2d35c2b0e7a748.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-6f33c2ea\x2d2c1c\x2d44bc\x2d8de2\x2d35c2b0e7a748.mount has successfully entered the 'dead' state.
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.067307603Z" level=info msg="runSandbox: deleting pod ID d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805 from idIndex" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.067331446Z" level=info msg="runSandbox: removing pod sandbox d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.067343424Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.067354827Z" level=info msg="runSandbox: unmounting shmPath for sandbox d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.079439619Z" level=info msg="runSandbox: removing pod sandbox from storage: d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.082339482Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:29.082361012Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bb9959b8-2ce3-4527-b406-3888a7dc7700 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:29.082588 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:29.082632 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:27:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:29.082655 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:27:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:29.082700 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:27:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d72caabf9ddd7e1a6d12c93d2c84571c9904be36bb25ba53973e130ac944e805-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.033699981Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.033742737Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a812d015\x2da17a\x2d44a1\x2d8f04\x2d218c57237585.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a812d015\x2da17a\x2d44a1\x2d8f04\x2d218c57237585.mount has successfully entered the 'dead' state. Jan 23 17:27:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a812d015\x2da17a\x2d44a1\x2d8f04\x2d218c57237585.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a812d015\x2da17a\x2d44a1\x2d8f04\x2d218c57237585.mount has successfully entered the 'dead' state. Jan 23 17:27:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a812d015\x2da17a\x2d44a1\x2d8f04\x2d218c57237585.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a812d015\x2da17a\x2d44a1\x2d8f04\x2d218c57237585.mount has successfully entered the 'dead' state. 
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.077453376Z" level=info msg="runSandbox: deleting pod ID dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634 from idIndex" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.077480629Z" level=info msg="runSandbox: removing pod sandbox dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.077494834Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.077506340Z" level=info msg="runSandbox: unmounting shmPath for sandbox dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634-userdata-shm.mount: Succeeded.
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.086463941Z" level=info msg="runSandbox: removing pod sandbox from storage: dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.090115949Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.090134276Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=3998fe45-0e35-402c-a0fe-45e558dff106 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:30.090398 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:30.090438 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:27:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:30.090461 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:27:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:30.090507 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dab9081e362b09b56c0079cf8cd0fb576e6f8cbfc9a45c06464403551d4b6634): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:27:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:30.996305 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.996598308Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:30.996638748Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:31.008606624Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/46e3d4dc-10b8-480f-8b0c-a7ea06166ba3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:31.008626171Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.037315657Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.037352982Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-66cb08dc\x2df8ea\x2d4e5b\x2daed2\x2d58ef4e657fcf.mount: Succeeded.
Jan 23 17:27:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-66cb08dc\x2df8ea\x2d4e5b\x2daed2\x2d58ef4e657fcf.mount: Succeeded.
Jan 23 17:27:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-66cb08dc\x2df8ea\x2d4e5b\x2daed2\x2d58ef4e657fcf.mount: Succeeded.
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.072311973Z" level=info msg="runSandbox: deleting pod ID 4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d from idIndex" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.072337351Z" level=info msg="runSandbox: removing pod sandbox 4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.072351053Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.072362480Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d-userdata-shm.mount: Succeeded.
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.084432719Z" level=info msg="runSandbox: removing pod sandbox from storage: 4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.087384445Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:32.087405087Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=9e0ab9fe-ac74-4bd0-bdcc-586159e88d12 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:32.087573 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:32.087798 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:32.087831 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:32.087902 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(4f26904c61762d08d8593905df87aa24ea206a67ed76a2b43c47557de7ec5f8d): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.037501396Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.037544989Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.037522615Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.037629794Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-66917607\x2dbf54\x2d45c9\x2d8bdf\x2d0b4a5ccaef3a.mount: Succeeded.
Jan 23 17:27:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-63167fb5\x2d3bf8\x2d4e9d\x2da694\x2d291b7ce920ad.mount: Succeeded.
Jan 23 17:27:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-63167fb5\x2d3bf8\x2d4e9d\x2da694\x2d291b7ce920ad.mount: Succeeded.
Jan 23 17:27:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-66917607\x2dbf54\x2d45c9\x2d8bdf\x2d0b4a5ccaef3a.mount: Succeeded.
Jan 23 17:27:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-63167fb5\x2d3bf8\x2d4e9d\x2da694\x2d291b7ce920ad.mount: Succeeded.
Jan 23 17:27:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-66917607\x2dbf54\x2d45c9\x2d8bdf\x2d0b4a5ccaef3a.mount: Succeeded.
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.079286330Z" level=info msg="runSandbox: deleting pod ID 9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615 from idIndex" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.079310518Z" level=info msg="runSandbox: removing pod sandbox 9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.079323891Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.079335448Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.083419395Z" level=info msg="runSandbox: deleting pod ID 53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b from idIndex" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.083443320Z" level=info msg="runSandbox: removing pod sandbox 53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.083456875Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.083467489Z" level=info msg="runSandbox: unmounting shmPath for sandbox 53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.095458783Z" level=info msg="runSandbox: removing pod sandbox from storage: 53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.095473866Z" level=info msg="runSandbox: removing pod sandbox from storage: 9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.098996425Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.099016571Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=49f40879-e46c-48a6-a618-88055fd3a294 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:33.099291 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:33.099335 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:27:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:33.099357 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:27:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:33.099402 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.102072621Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:33.102092000Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=791e8be0-a196-448c-83ff-df1c4b994a76 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:33.102297 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:27:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:33.102340 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:27:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:33.102362 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:27:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:33.102409 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:27:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-53529fa027715088d0bfca8896e845ea7ed42d3037d968bff5d15bbe315e287b-userdata-shm.mount: Succeeded.
Jan 23 17:27:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9a570e08abe96bd7744c3cb56b18ce52f547dd9d4529192118c797ee32416615-userdata-shm.mount: Succeeded.
Jan 23 17:27:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:34.995789 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:27:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:34.996219118Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:34.996263492Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.008082430Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/79545c27-4517-4540-9289-c1bdb9b2dbd8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.008101571Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.034815574Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.034846641Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.036453997Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6): error removing pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.036496161Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b11b0f09\x2da264\x2d427a\x2dae78\x2d25b7bb3b9441.mount: Succeeded.
Jan 23 17:27:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c97892ba\x2d30de\x2d4d2a\x2da5ce\x2d66441db14c54.mount: Succeeded.
Jan 23 17:27:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c97892ba\x2d30de\x2d4d2a\x2da5ce\x2d66441db14c54.mount: Succeeded.
Jan 23 17:27:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b11b0f09\x2da264\x2d427a\x2dae78\x2d25b7bb3b9441.mount: Succeeded.
Jan 23 17:27:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c97892ba\x2d30de\x2d4d2a\x2da5ce\x2d66441db14c54.mount: Succeeded.
Jan 23 17:27:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b11b0f09\x2da264\x2d427a\x2dae78\x2d25b7bb3b9441.mount: Succeeded.
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.075315296Z" level=info msg="runSandbox: deleting pod ID 4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb from idIndex" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.075339718Z" level=info msg="runSandbox: removing pod sandbox 4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.075352587Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.075364178Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.076285985Z" level=info msg="runSandbox: deleting pod ID 36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6 from idIndex" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.076314814Z" level=info msg="runSandbox: removing pod sandbox 36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.076332363Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.076345352Z" level=info msg="runSandbox: unmounting shmPath for sandbox 36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb-userdata-shm.mount: Succeeded.
Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.091427375Z" level=info msg="runSandbox: removing pod sandbox from storage: 4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.091446850Z" level=info msg="runSandbox: removing pod sandbox from storage: 36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.094182870Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.094202573Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=eb7d7861-5d49-42ab-a987-5085e6b898de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:35.094448 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:35.094496 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:35.094519 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:35.094568 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4d0816d4f982f1d411fd2100b1858cbac255b4a4a802309356db75c91d6951cb): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.099683103Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:35.099722431Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0" id=29117a25-3365-4cec-a633-cdb3f1e04b21 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:35.099957 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:35.100002 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:35.100025 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" Jan 23 17:27:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:35.100072 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(b3d56249-2e6a-43ad-a3c0-2fa37cef89b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_b3d56249-2e6a-43ad-a3c0-2fa37cef89b0_0(36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6): error adding pod openshift-kube-apiserver_installer-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 Jan 23 17:27:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-36ab96d54c1f2d76fc9c9ede4d1921151cca7d28c7c1c24f78d485a8e84f59a6-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.178919 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab]
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.178965 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.185438 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab]
Jan 23 17:27:37 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-pod4118bc95_e963_4fc7_bb2e_ceda3fe6f298.slice.
-- Subject: Unit kubepods-pod4118bc95_e963_4fc7_bb2e_ceda3fe6f298.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-pod4118bc95_e963_4fc7_bb2e_ceda3fe6f298.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.282734 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4118bc95-e963-4fc7-bb2e-ceda3fe6f298-kubelet-dir\") pod \"revision-pruner-11-hub-master-0.workload.bos2.lab\" (UID: \"4118bc95-e963-4fc7-bb2e-ceda3fe6f298\") " pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.282807 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4118bc95-e963-4fc7-bb2e-ceda3fe6f298-kube-api-access\") pod \"revision-pruner-11-hub-master-0.workload.bos2.lab\" (UID: \"4118bc95-e963-4fc7-bb2e-ceda3fe6f298\") " pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.383313 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4118bc95-e963-4fc7-bb2e-ceda3fe6f298-kubelet-dir\") pod \"revision-pruner-11-hub-master-0.workload.bos2.lab\" (UID: \"4118bc95-e963-4fc7-bb2e-ceda3fe6f298\") " pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.383354 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4118bc95-e963-4fc7-bb2e-ceda3fe6f298-kube-api-access\") pod \"revision-pruner-11-hub-master-0.workload.bos2.lab\" (UID: \"4118bc95-e963-4fc7-bb2e-ceda3fe6f298\") " pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.383417 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4118bc95-e963-4fc7-bb2e-ceda3fe6f298-kubelet-dir\") pod \"revision-pruner-11-hub-master-0.workload.bos2.lab\" (UID: \"4118bc95-e963-4fc7-bb2e-ceda3fe6f298\") " pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.398693 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4118bc95-e963-4fc7-bb2e-ceda3fe6f298-kube-api-access\") pod \"revision-pruner-11-hub-master-0.workload.bos2.lab\" (UID: \"4118bc95-e963-4fc7-bb2e-ceda3fe6f298\") " pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.494692 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:37.495175673Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:37.495242851Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:37.507390911Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/30748140-b6e0-4321-85d6-4ad6f07c75c7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:37.507416267Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:37.996688 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:27:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:37.997039101Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:37.997076545Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:38.007393781Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8879841a-eefe-4691-8c39-4752c6845dab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:38.007412579Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:38.996440 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:27:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:38.996975 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.038358759Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.038396305Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c92ba7b2\x2ddf46\x2d45ac\x2d88d3\x2de8af597ddabf.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c92ba7b2\x2ddf46\x2d45ac\x2d88d3\x2de8af597ddabf.mount has successfully entered the 'dead' state.
Jan 23 17:27:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c92ba7b2\x2ddf46\x2d45ac\x2d88d3\x2de8af597ddabf.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c92ba7b2\x2ddf46\x2d45ac\x2d88d3\x2de8af597ddabf.mount has successfully entered the 'dead' state.
Jan 23 17:27:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c92ba7b2\x2ddf46\x2d45ac\x2d88d3\x2de8af597ddabf.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c92ba7b2\x2ddf46\x2d45ac\x2d88d3\x2de8af597ddabf.mount has successfully entered the 'dead' state.
Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.075324015Z" level=info msg="runSandbox: deleting pod ID a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2 from idIndex" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.075348514Z" level=info msg="runSandbox: removing pod sandbox a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.075362379Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.075374652Z" level=info msg="runSandbox: unmounting shmPath for sandbox a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.088437623Z" level=info msg="runSandbox: removing pod sandbox from storage: a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.091304698Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:39.091323427Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=29863dc6-8c51-4dc4-a9c8-9020a72625d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:39.091537 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:27:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:39.091575 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:27:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:39.091599 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:27:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:39.091644 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a14cd906af48bd9672a4dfbaa2d030edf0622ff5bac353cc4e5a4c22667facf2): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:27:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:39.982877 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab] Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.103886 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kube-api-access\") pod \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.103917 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-var-lock\") pod \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.103937 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kubelet-dir\") pod \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\" (UID: \"b3d56249-2e6a-43ad-a3c0-2fa37cef89b0\") " Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.104043 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b3d56249-2e6a-43ad-a3c0-2fa37cef89b0" (UID: "b3d56249-2e6a-43ad-a3c0-2fa37cef89b0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.104068 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-var-lock" (OuterVolumeSpecName: "var-lock") pod "b3d56249-2e6a-43ad-a3c0-2fa37cef89b0" (UID: "b3d56249-2e6a-43ad-a3c0-2fa37cef89b0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:27:40 hub-master-0.workload.bos2.lab systemd[1]: var-lib-kubelet-pods-b3d56249\x2d2e6a\x2d43ad\x2da3c0\x2d2fa37cef89b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-kubelet-pods-b3d56249\x2d2e6a\x2d43ad\x2da3c0\x2d2fa37cef89b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess.mount has successfully entered the 'dead' state. Jan 23 17:27:40 hub-master-0.workload.bos2.lab systemd[1]: var-lib-kubelet-pods-b3d56249\x2d2e6a\x2d43ad\x2da3c0\x2d2fa37cef89b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess.mount: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-kubelet-pods-b3d56249\x2d2e6a\x2d43ad\x2da3c0\x2d2fa37cef89b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess.mount completed and consumed the indicated resources. 
Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.119831 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b3d56249-2e6a-43ad-a3c0-2fa37cef89b0" (UID: "b3d56249-2e6a-43ad-a3c0-2fa37cef89b0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.204679 8631 reconciler.go:399] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-var-lock\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.204695 8631 reconciler.go:399] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kubelet-dir\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.204705 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0-kube-api-access\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\""
Jan 23 17:27:40 hub-master-0.workload.bos2.lab systemd[1]: Removed slice libcontainer container kubepods-podb3d56249_2e6a_43ad_a3c0_2fa37cef89b0.slice.
-- Subject: Unit kubepods-podb3d56249_2e6a_43ad_a3c0_2fa37cef89b0.slice has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-podb3d56249_2e6a_43ad_a3c0_2fa37cef89b0.slice has finished shutting down.
Jan 23 17:27:40 hub-master-0.workload.bos2.lab systemd[1]: kubepods-podb3d56249_2e6a_43ad_a3c0_2fa37cef89b0.slice: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit kubepods-podb3d56249_2e6a_43ad_a3c0_2fa37cef89b0.slice completed and consumed the indicated resources.
Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.388943 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab]
Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.390840 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-kube-apiserver/installer-10-hub-master-0.workload.bos2.lab]
Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.995717 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:27:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:40.995937 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:27:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:40.996067764Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:40.996330755Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:40.996191343Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:40.996421538Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:41.010297908Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/9e23497f-fa83-4576-b368-85b461a86c11 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:41.010326958Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:41.011908597Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/1968349c-6df1-4a85-9aac-9fabd85ccabd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:41.011927864Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:42.000084 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b3d56249-2e6a-43ad-a3c0-2fa37cef89b0 path="/var/lib/kubelet/pods/b3d56249-2e6a-43ad-a3c0-2fa37cef89b0/volumes"
Jan 23 17:27:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:43.996630 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:27:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:43.996841 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:27:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:43.997017816Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:43.997069335Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:43.997130824Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:43.997172046Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:44.013379844Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d7281c7a-5a10-42a1-b2d4-d918f6b41aca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:44.013401614Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:44.014216458Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/7fa01ebf-7220-4450-8c8f-a12524ceb532 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:44.014235980Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:45.995812 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:45.996187637Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:45.996255711Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:46.007830095Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/ffbe0762-65b8-4b7a-bbd7-e0aee4a38f98 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:46.007852134Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:47 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00102|connmgr|INFO|br-int<->unix#2: 20 flow_mods in the 3 s starting 10 s ago (10 adds, 10 deletes)
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.380027 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab]
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.380187 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.386169 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab]
Jan 23 17:27:48 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-podbf374316_9255_4614_af0e_15402ae67a30.slice.
-- Subject: Unit kubepods-podbf374316_9255_4614_af0e_15402ae67a30.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit kubepods-podbf374316_9255_4614_af0e_15402ae67a30.slice has finished starting up.
--
-- The start-up result is done.
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.460199 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf374316-9255-4614-af0e-15402ae67a30-kube-api-access\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.460234 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf374316-9255-4614-af0e-15402ae67a30-kubelet-dir\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.460254 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf374316-9255-4614-af0e-15402ae67a30-var-lock\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.560857 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf374316-9255-4614-af0e-15402ae67a30-kube-api-access\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.560887 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf374316-9255-4614-af0e-15402ae67a30-kubelet-dir\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.560908 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf374316-9255-4614-af0e-15402ae67a30-var-lock\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.560967 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf374316-9255-4614-af0e-15402ae67a30-var-lock\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.561012 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf374316-9255-4614-af0e-15402ae67a30-kubelet-dir\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.576647 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf374316-9255-4614-af0e-15402ae67a30-kube-api-access\") pod \"installer-11-hub-master-0.workload.bos2.lab\" (UID: \"bf374316-9255-4614-af0e-15402ae67a30\") " pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:48.695675 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:48.696122771Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:48.696189907Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:48.708349606Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/328e7375-fe74-461c-bf19-c99a9e0759e1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:48.708376662Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:50.996454 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:27:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:50.996794690Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:50.996833493Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:51.007552151Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/bccb2c39-618e-4562-9148-9e14a9b5aafb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:51.007572005Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:53.996600 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:27:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:53.997114 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:27:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:54.995650 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:27:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:54.996033179Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:54.996247550Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:27:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:55.008348952Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/22e7911e-3cdc-4493-8c84-76f56c953887 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:27:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:55.008370225Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:27:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:58.142485495Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.296388479Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.296431403Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.296469733Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.296531301Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.298032131Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.298062961Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.298519200Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.298555595Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.298659433Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.298683634Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2783fedd\x2de313\x2d4dea\x2dbd3c\x2dac321c6594e3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2783fedd\x2de313\x2d4dea\x2dbd3c\x2dac321c6594e3.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-86852d82\x2d588e\x2d4bbb\x2d9041\x2de2b7f1b5fa4a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-86852d82\x2d588e\x2d4bbb\x2d9041\x2de2b7f1b5fa4a.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-584e2b94\x2d0bf0\x2d4d2f\x2d837c\x2d22188dee6f3d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-584e2b94\x2d0bf0\x2d4d2f\x2d837c\x2d22188dee6f3d.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3c219f2c\x2d6bc0\x2d4d4a\x2dae13\x2d4ddbfbf4f358.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3c219f2c\x2d6bc0\x2d4d4a\x2dae13\x2d4ddbfbf4f358.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-563e7a43\x2dd752\x2d48cf\x2d9031\x2d5782ea05fe7f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-563e7a43\x2dd752\x2d48cf\x2d9031\x2d5782ea05fe7f.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2783fedd\x2de313\x2d4dea\x2dbd3c\x2dac321c6594e3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2783fedd\x2de313\x2d4dea\x2dbd3c\x2dac321c6594e3.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-86852d82\x2d588e\x2d4bbb\x2d9041\x2de2b7f1b5fa4a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-86852d82\x2d588e\x2d4bbb\x2d9041\x2de2b7f1b5fa4a.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-584e2b94\x2d0bf0\x2d4d2f\x2d837c\x2d22188dee6f3d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-584e2b94\x2d0bf0\x2d4d2f\x2d837c\x2d22188dee6f3d.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3c219f2c\x2d6bc0\x2d4d4a\x2dae13\x2d4ddbfbf4f358.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3c219f2c\x2d6bc0\x2d4d4a\x2dae13\x2d4ddbfbf4f358.mount has successfully entered the 'dead' state.
Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.337338404Z" level=info msg="runSandbox: deleting pod ID 5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274 from idIndex" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.337364542Z" level=info msg="runSandbox: removing pod sandbox 5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.337378497Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.337390642Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.337339874Z" level=info msg="runSandbox: deleting pod ID 94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694 from idIndex" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.337447349Z" level=info msg="runSandbox: removing pod sandbox 94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.337459975Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.337472665Z" level=info msg="runSandbox: unmounting shmPath for sandbox 94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341344720Z" level=info msg="runSandbox: deleting pod ID 5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318 from idIndex" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341370771Z" level=info msg="runSandbox: removing pod sandbox 5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341347610Z" level=info msg="runSandbox: deleting pod ID 0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d from idIndex" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341398304Z" level=info msg="runSandbox: removing pod sandbox 
0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341408709Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341418761Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341349143Z" level=info msg="runSandbox: deleting pod ID 5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a from idIndex" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341452465Z" level=info msg="runSandbox: removing pod sandbox 5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341465989Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341476925Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341587870Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.341614399Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.349442403Z" level=info msg="runSandbox: removing pod sandbox from storage: 5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.350461564Z" level=info msg="runSandbox: removing pod sandbox from storage: 94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.352665556Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" 
id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.352684183Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=503a05ab-12cc-4787-90f4-9da8a0eafc07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.352938 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.352992 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.353015 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.353063 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.353463474Z" level=info msg="runSandbox: removing pod sandbox from storage: 5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.353467130Z" level=info msg="runSandbox: removing pod sandbox from storage: 5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.353473726Z" level=info msg="runSandbox: removing pod sandbox from storage: 0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.355863232Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.355881321Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c5cdf215-c66b-4b4a-abac-b5322ea6e2ba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.356139 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.356178 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.356200 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.356254 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.358892766Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.358910667Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=0b21d1c7-bf6b-4c6a-ad58-2f1479dc6545 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.359032 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.359066 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.359086 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.359125 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.361844521Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.361861247Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=3381c4b6-16b9-46ad-9774-1a096e29b8b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.362069 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.362101 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.362121 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.362159 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.364605192Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.364622642Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=74bb148e-826f-4937-82a2-70f3a6d4e9c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.364784 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.364818 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.364839 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:27:59.364889 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:59.417315 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:59.417398 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:59.417523 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.417615676Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.417648966Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.417615871Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.417721921Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:59.417725 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:27:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:27:59.417849 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.417874643Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.417904075Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.417978133Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.417999204Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.418131899Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.418161620Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.443719494Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/11e5edf1-a0eb-469e-a3c4-d611a1261111 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.443741566Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.447645615Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/f8b9b0bb-ec54-4f95-b06b-6ce5a1adbb29 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.447668130Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.450344630Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f2cd8cbb-7a4b-453d-be8f-10fe6c897b07 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:27:59.450367027Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.451610258Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/8162270e-2fae-4206-a614-60daee077ffb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.451630392Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.454747502Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/420669ec-0b53-4b55-b131-475269fb0ec7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:27:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:27:59.454766655Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-584e2b94\x2d0bf0\x2d4d2f\x2d837c\x2d22188dee6f3d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-584e2b94\x2d0bf0\x2d4d2f\x2d837c\x2d22188dee6f3d.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3c219f2c\x2d6bc0\x2d4d4a\x2dae13\x2d4ddbfbf4f358.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3c219f2c\x2d6bc0\x2d4d4a\x2dae13\x2d4ddbfbf4f358.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-563e7a43\x2dd752\x2d48cf\x2d9031\x2d5782ea05fe7f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-563e7a43\x2dd752\x2d48cf\x2d9031\x2d5782ea05fe7f.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-563e7a43\x2dd752\x2d48cf\x2d9031\x2d5782ea05fe7f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-563e7a43\x2dd752\x2d48cf\x2d9031\x2d5782ea05fe7f.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2783fedd\x2de313\x2d4dea\x2dbd3c\x2dac321c6594e3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2783fedd\x2de313\x2d4dea\x2dbd3c\x2dac321c6594e3.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-86852d82\x2d588e\x2d4bbb\x2d9041\x2de2b7f1b5fa4a.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-86852d82\x2d588e\x2d4bbb\x2d9041\x2de2b7f1b5fa4a.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5fe145eb31c07840bea0a04743dccd61fafb716b1f9fa34498e793a6e826a274-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5be23bd52c718cfa8b23f03e3fa337b111366053eb6808ca3bbcf89c7e820e2a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5725cde6200a8c7555337e4b9d3a3316750d47cdb6e54d221465b0fb937e6318-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0fae4b155fe854c8f9ae98cb165c208bf0ad650a360ac67e86e3ebdfa2b8d36d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:28:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-94b23048fab055e107b37def713271072f071c70cfbf56a98a58bdabe43fa694-userdata-shm.mount has successfully entered the 'dead' state. 
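Every failed sandbox add/del above ends in the same Multus message: "PollImmediate error waiting for ReadinessIndicatorFile ... timed out waiting for the condition". Multus is polling for the default network's config to appear at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf (written by ovnkube-node, which is crash-looping below), and gives up when its timeout expires. A minimal Go sketch of that kind of check; the interval and timeout here are illustrative assumptions, not Multus's configured values:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the readiness indicator file
// exists, the same shape of check the Multus errors above time out on.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			return false, nil // not there yet: keep polling until timeout
		}
		return true, nil
	})
}

func main() {
	err := waitForReadinessIndicator(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
	fmt.Println(err)
}
```

On expiry, wait's timeout error prints exactly the "timed out waiting for the condition" text embedded in the log messages above.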
Jan 23 17:28:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:04.997127 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:28:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:04.997832 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:14.021984712Z" level=info msg="NetworkStart: stopping network for sandbox 3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:14.022142993Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a2c9dc54-93a2-4e32-87cf-9a709e042aa7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:14.022167133Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:14.022174991Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:14.022182878Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:15.996680 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:15.997446611Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=d5974a96-7bc7-44c8-96c5-a7f4fade846a name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:15.997809536Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d5974a96-7bc7-44c8-96c5-a7f4fade846a name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:15.998500834Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=e544c42b-fde1-4ce6-b1c5-1965106bfed9 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:15.998648223Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e544c42b-fde1-4ce6-b1c5-1965106bfed9 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:15.999891461Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=50118612-9419-474f-99e0-77a25ae9d79c name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:28:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:15.999967813Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope.
-- Subject: Unit crio-conmon-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.021531892Z" level=info msg="NetworkStart: stopping network for sandbox 6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.021715177Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/46e3d4dc-10b8-480f-8b0c-a7ea06166ba3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.021739346Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.021746742Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.021755530Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.
-- Subject: Unit crio-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope has finished starting up.
--
-- The start-up result is done.
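The "back-off 5m0s restarting failed container" errors at 17:28:04 above (and again after the next crash at 17:28:17) are the kubelet's crash-loop back-off: the restart delay doubles after each failure until it hits a cap. The 5m cap matches the message; the 10s initial delay in this sketch is an assumption about the kubelet's default, not something the log states:

```go
package main

import (
	"fmt"
	"time"
)

// backoff sketches the crash-loop restart delay: double per crash,
// capped at the 5m0s seen in the "back-off 5m0s" messages above.
func backoff(restarts int) time.Duration {
	d := 10 * time.Second // assumed initial delay
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for i := 0; i <= 6; i++ {
		fmt.Printf("crash %d -> wait %v\n", i, backoff(i))
	}
}
```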
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.117961651Z" level=info msg="Created container ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=50118612-9419-474f-99e0-77a25ae9d79c name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.118507087Z" level=info msg="Starting container: ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" id=a740b298-2357-48a6-bea5-3c633af66088 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.125035576Z" level=info msg="Started container" PID=145656 containerID=ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=a740b298-2357-48a6-bea5-3c633af66088 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.130498654Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.141680056Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.141697674Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.141707804Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.149692842Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.149709319Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.149718526Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.158653751Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.158669397Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.158678843Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.166489886Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.166507400Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.166515916Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.174294794Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:16.174315348Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:16.449170 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/192.log"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:16.449981 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736}
Jan 23 17:28:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:16.450179 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:28:16 hub-master-0.workload.bos2.lab conmon[145633]: conmon ba5484b950b0e26dbedc : container 145656 exited with status 1
Jan 23 17:28:16 hub-master-0.workload.bos2.lab systemd[1]: crio-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope has successfully entered the 'dead' state.
Jan 23 17:28:16 hub-master-0.workload.bos2.lab systemd[1]: crio-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope: Consumed 578ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope completed and consumed the indicated resources.
Jan 23 17:28:16 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope has successfully entered the 'dead' state.
Jan 23 17:28:16 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope: Consumed 49ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736.scope completed and consumed the indicated resources.
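Note how the exit status travels: conmon logs "container 145656 exited with status 1" above, and on the next PLEG relist (17:28:17 below) the kubelet records the same exitCode=1 and emits a ContainerDied event. In Go, a child's status comes back the same way, through exec.ExitError; a small self-contained illustration:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run a command that exits 1 and recover the status, the same
	// value conmon reports and the kubelet later records as exitCode.
	err := exec.Command("sh", "-c", "exit 1").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exited with status", ee.ExitCode()) // prints 1
	}
}
```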
Jan 23 17:28:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:17.453420 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/193.log"
Jan 23 17:28:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:17.454003 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/192.log"
Jan 23 17:28:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:17.455040 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" exitCode=1
Jan 23 17:28:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:17.455063 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736}
Jan 23 17:28:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:17.455086 8631 scope.go:115] "RemoveContainer" containerID="f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf"
Jan 23 17:28:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:17.455970 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:28:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:17.455903282Z" level=info msg="Removing container: f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf" id=19bfc9bb-ba2d-4ae3-8e0a-e16325073413 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:28:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:17.456451 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:28:17 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-49a0e40393086d7c8e608c67ed90e4e20f32571baf30cde2fc861019385d9b3d-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-49a0e40393086d7c8e608c67ed90e4e20f32571baf30cde2fc861019385d9b3d-merged.mount has successfully entered the 'dead' state.
Jan 23 17:28:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:17.504110285Z" level=info msg="Removed container f2257fc741579468bf0f0f3bb407a18b10d92d22c5bddf4ed8fb8f30e7f7abbf: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=19bfc9bb-ba2d-4ae3-8e0a-e16325073413 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:28:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:18.458824 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/193.log"
Jan 23 17:28:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:18.460720 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:28:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:18.463709 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:20.020782775Z" level=info msg="NetworkStart: stopping network for sandbox d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:20.020925628Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/79545c27-4517-4540-9289-c1bdb9b2dbd8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:20.020948349Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:20.020955573Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:20.020963463Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:22.520453062Z" level=info msg="NetworkStart: stopping network for sandbox da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:22.520641814Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/30748140-b6e0-4321-85d6-4ad6f07c75c7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:22.520666402Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:22.520673722Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:22.520679751Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:23.019096830Z" level=info msg="NetworkStart: stopping network for sandbox 340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:23.019277598Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8879841a-eefe-4691-8c39-4752c6845dab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:23.019301372Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:23.019308117Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:23.019314401Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024508756Z" level=info msg="NetworkStart: stopping network for sandbox 097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024664131Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/9e23497f-fa83-4576-b368-85b461a86c11 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024692156Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024699389Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024706243Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024745412Z" level=info msg="NetworkStart: stopping network for sandbox 752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024892548Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/1968349c-6df1-4a85-9aac-9fabd85ccabd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024917621Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024924733Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:26.024931826Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:27.900310 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:27.900464 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:27.900471 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:27.900478 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:27.900484 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:27.900494 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:28:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:27.900500 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:28:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:28.143551995Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.026348762Z" level=info msg="NetworkStart: stopping network for sandbox e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.026492261Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/d7281c7a-5a10-42a1-b2d4-d918f6b41aca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.026520801Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.026528036Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.026535040Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.027344733Z" level=info msg="NetworkStart: stopping network for sandbox 2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.027443627Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/7fa01ebf-7220-4450-8c8f-a12524ceb532 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.027463840Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.027471501Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:29.027477460Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:29.997198 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:28:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:29.997753 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:28:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:31.022063950Z" level=info msg="NetworkStart: stopping network for sandbox 396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:31.022222159Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/ffbe0762-65b8-4b7a-bbd7-e0aee4a38f98 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:31.022247435Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:31.022255539Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:31.022262594Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:33.722158453Z" level=info msg="NetworkStart: stopping network for sandbox 59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:33.722529781Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/328e7375-fe74-461c-bf19-c99a9e0759e1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:33.722554336Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:33.722561210Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:33.722568023Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:36.020760149Z" level=info msg="NetworkStart: stopping network for sandbox fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:36.020920224Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/bccb2c39-618e-4562-9148-9e14a9b5aafb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:36.020945429Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:36.020952729Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:36.020960378Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:40.021005892Z" level=info msg="NetworkStart: stopping network for sandbox e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:40.021141291Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/22e7911e-3cdc-4493-8c84-76f56c953887 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:40.021167239Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:40.021174016Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:40.021181258Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:41.996897 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:28:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:41.997417 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.457577668Z" level=info msg="NetworkStart: stopping network for sandbox 3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.457719426Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/11e5edf1-a0eb-469e-a3c4-d611a1261111 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.457742539Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.457748967Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.457756638Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.460754047Z" level=info msg="NetworkStart: stopping network for sandbox d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.460856475Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/f8b9b0bb-ec54-4f95-b06b-6ce5a1adbb29 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.460876971Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.460883521Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.460889400Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.463134386Z" level=info msg="NetworkStart: stopping network for sandbox f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.463257014Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f2cd8cbb-7a4b-453d-be8f-10fe6c897b07 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.463282115Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.463290641Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.463297442Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.464862653Z" level=info msg="NetworkStart: stopping network for sandbox 96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.464975896Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/8162270e-2fae-4206-a614-60daee077ffb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.464997800Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.465004452Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.465010451Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.468879584Z" level=info msg="NetworkStart: stopping network for sandbox a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.469027647Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/420669ec-0b53-4b55-b131-475269fb0ec7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.469051612Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.469059425Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:28:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:44.469067879Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:28:47 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00103|connmgr|INFO|br-int<->unix#2: 10 flow_mods 58 s ago (10 adds)
Jan 23 17:28:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:28:52.996790 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:28:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:52.997350 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:28:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:58.142608024Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.034704351Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.034747150Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a2c9dc54\x2d93a2\x2d4e32\x2d87cf\x2d9a709e042aa7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a2c9dc54\x2d93a2\x2d4e32\x2d87cf\x2d9a709e042aa7.mount has successfully entered the 'dead' state.
Jan 23 17:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a2c9dc54\x2d93a2\x2d4e32\x2d87cf\x2d9a709e042aa7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a2c9dc54\x2d93a2\x2d4e32\x2d87cf\x2d9a709e042aa7.mount has successfully entered the 'dead' state.
Jan 23 17:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a2c9dc54\x2d93a2\x2d4e32\x2d87cf\x2d9a709e042aa7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a2c9dc54\x2d93a2\x2d4e32\x2d87cf\x2d9a709e042aa7.mount has successfully entered the 'dead' state.
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.080413134Z" level=info msg="runSandbox: deleting pod ID 3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3 from idIndex" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.080453243Z" level=info msg="runSandbox: removing pod sandbox 3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.080469904Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.080486497Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.089506754Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.095806452Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:28:59.095835478Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=54bcd870-e5a8-4a70-85e5-299edaa33bd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:59.096213 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:59.096262 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:59.096288 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:28:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:28:59.096340 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3d1c16fb14f5e820529151ebe67082d1582899071ce56368403196120e132fb3): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.033909246Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.033961069Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-46e3d4dc\x2d10b8\x2d480f\x2d8b0c\x2da7ea06166ba3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-46e3d4dc\x2d10b8\x2d480f\x2d8b0c\x2da7ea06166ba3.mount has successfully entered the 'dead' state.
Jan 23 17:29:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-46e3d4dc\x2d10b8\x2d480f\x2d8b0c\x2da7ea06166ba3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-46e3d4dc\x2d10b8\x2d480f\x2d8b0c\x2da7ea06166ba3.mount has successfully entered the 'dead' state.
Jan 23 17:29:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-46e3d4dc\x2d10b8\x2d480f\x2d8b0c\x2da7ea06166ba3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-46e3d4dc\x2d10b8\x2d480f\x2d8b0c\x2da7ea06166ba3.mount has successfully entered the 'dead' state.
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.077410508Z" level=info msg="runSandbox: deleting pod ID 6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51 from idIndex" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.077442420Z" level=info msg="runSandbox: removing pod sandbox 6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.077458339Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.077480520Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.089422045Z" level=info msg="runSandbox: removing pod sandbox from storage: 6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.093690822Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:01.093712076Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=c3fd176d-ef7b-481b-8d84-67773ca41fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:01.093941 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:01.093986 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:01.094011 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:29:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:01.094066 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6a4c471f7e159b66005b4f4fdffbfd20867385baa7fb0516318958592bb68f51): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.032157780Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.032200089Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-79545c27\x2d4517\x2d4540\x2d9289\x2dc1bdb9b2dbd8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-79545c27\x2d4517\x2d4540\x2d9289\x2dc1bdb9b2dbd8.mount has successfully entered the 'dead' state.
Jan 23 17:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-79545c27\x2d4517\x2d4540\x2d9289\x2dc1bdb9b2dbd8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-79545c27\x2d4517\x2d4540\x2d9289\x2dc1bdb9b2dbd8.mount has successfully entered the 'dead' state.
Jan 23 17:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-79545c27\x2d4517\x2d4540\x2d9289\x2dc1bdb9b2dbd8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-79545c27\x2d4517\x2d4540\x2d9289\x2dc1bdb9b2dbd8.mount has successfully entered the 'dead' state.
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.064309616Z" level=info msg="runSandbox: deleting pod ID d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f from idIndex" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.064335625Z" level=info msg="runSandbox: removing pod sandbox d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.064350251Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.064361325Z" level=info msg="runSandbox: unmounting shmPath for sandbox d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.076419127Z" level=info msg="runSandbox: removing pod sandbox from storage: d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.080118935Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:05.080137292Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=a634575c-1af7-4438-bbe8-3a14e53dcb80 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:05.080430 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:05.080470 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:05.080490 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:29:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:05.080531 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d457f1ead95ca11bc3298128516241a05a93eb8959a7d488de01c0c0b282c77f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.530837841Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.530875251Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-30748140\x2db6e0\x2d4321\x2d85d6\x2d4ad6f07c75c7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-30748140\x2db6e0\x2d4321\x2d85d6\x2d4ad6f07c75c7.mount has successfully entered the 'dead' state.
Jan 23 17:29:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-30748140\x2db6e0\x2d4321\x2d85d6\x2d4ad6f07c75c7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-30748140\x2db6e0\x2d4321\x2d85d6\x2d4ad6f07c75c7.mount has successfully entered the 'dead' state.
Jan 23 17:29:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-30748140\x2db6e0\x2d4321\x2d85d6\x2d4ad6f07c75c7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-30748140\x2db6e0\x2d4321\x2d85d6\x2d4ad6f07c75c7.mount has successfully entered the 'dead' state.
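[Editor's note] Every "failed (add)" error above bottoms out in the same Multus poll: Multus waits for the default network (here OVN-Kubernetes) to write its readiness indicator file, and when that file never appears it gives up with the stock timeout message quoted in the log. A minimal sketch of that polling pattern, assuming the k8s.io/apimachinery wait package; the interval, timeout, and the helper name pollReadiness are illustrative, not Multus source:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// pollReadiness blocks until path exists or timeout elapses. On timeout,
// wait.PollImmediate returns an error whose text is exactly the
// "timed out waiting for the condition" seen throughout the log above.
func pollReadiness(path string, timeout time.Duration) error {
	return wait.PollImmediate(1*time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err == nil {
			return true, nil // the default network wrote its CNI config: ready
		}
		return false, nil // file still missing: keep polling, do not abort
	})
}

func main() {
	err := pollReadiness("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
	if err != nil {
		fmt.Println("pollimmediate error:", err) // prints: timed out waiting for the condition
	}
}

The readiness indicator file is only written once the default CNI plugin is actually up, which is why every pod add and delete on this node keeps failing while ovnkube-node is down (see the CrashLoopBackOff entries below).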
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.572309915Z" level=info msg="runSandbox: deleting pod ID da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf from idIndex" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.572344655Z" level=info msg="runSandbox: removing pod sandbox da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.572361505Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.572373605Z" level=info msg="runSandbox: unmounting shmPath for sandbox da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.584478319Z" level=info msg="runSandbox: removing pod sandbox from storage: da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.588110739Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:07.588129233Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=db72ac9d-1058-4cba-b79e-8d8c7dc182bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:07.588334 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:29:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:07.588382 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:29:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:07.588408 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:29:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:07.588460 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(da8dc020cfb43f5efc233b2c4844ac4e9d21a221f1825c0bffd5884decc514cf): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:29:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:07.996738 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:29:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:07.997233 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.029954109Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.029986796Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8879841a\x2deefe\x2d4691\x2d8c39\x2d4752c6845dab.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-8879841a\x2deefe\x2d4691\x2d8c39\x2d4752c6845dab.mount has successfully entered the 'dead' state.
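[Editor's note] The ovnkube-node pod is in CrashLoopBackOff, so the kubelet only logs the back-off instead of restarting it; a down OVN-Kubernetes daemon is consistent with the readiness indicator file never appearing. The kubelet's restart delay doubles after each crash up to a cap, which is where the "back-off 5m0s" figure comes from. A toy illustration, assuming the commonly cited kubelet defaults of a 10s initial delay and a 5m cap (assumed, not read from this node's configuration):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Model of kubelet's container restart back-off: double per crash,
	// saturate at the cap. 10s and 5m are assumed defaults.
	delay, max := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > max {
			delay = max // kubelet then reports "back-off 5m0s restarting failed container"
		}
	}
}

Once the container stays up long enough, the kubelet resets the delay; until then every sync of the pod is skipped with the error above.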
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.071300490Z" level=info msg="runSandbox: deleting pod ID 340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6 from idIndex" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.071323639Z" level=info msg="runSandbox: removing pod sandbox 340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.071335219Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.071346465Z" level=info msg="runSandbox: unmounting shmPath for sandbox 340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.083455426Z" level=info msg="runSandbox: removing pod sandbox from storage: 340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.086792173Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.086810073Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=f368b4fb-32dd-4a78-8266-e85335d00214 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:08.086988 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:29:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:08.087026 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:29:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:08.087048 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:29:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:08.087087 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:29:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8879841a\x2deefe\x2d4691\x2d8c39\x2d4752c6845dab.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-8879841a\x2deefe\x2d4691\x2d8c39\x2d4752c6845dab.mount has successfully entered the 'dead' state.
Jan 23 17:29:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8879841a\x2deefe\x2d4691\x2d8c39\x2d4752c6845dab.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-8879841a\x2deefe\x2d4691\x2d8c39\x2d4752c6845dab.mount has successfully entered the 'dead' state.
Jan 23 17:29:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-340fd6da7951033acfe137620be3fac8d9ed79270e85eef0d7c5826eb2e7fff6-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:29:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:08.558503 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.558786942Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.558819688Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.570434837Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/07e258e4-7bc9-499e-b3f4-15a60dd2b445 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:29:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:08.570606556Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.036431572Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.036476148Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.036646574Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.036684769Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1968349c\x2d6df1\x2d4a85\x2d9aac\x2d9fabd85ccabd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-1968349c\x2d6df1\x2d4a85\x2d9aac\x2d9fabd85ccabd.mount has successfully entered the 'dead' state.
Jan 23 17:29:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9e23497f\x2dfa83\x2d4576\x2db368\x2d85b461a86c11.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9e23497f\x2dfa83\x2d4576\x2db368\x2d85b461a86c11.mount has successfully entered the 'dead' state.
Jan 23 17:29:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1968349c\x2d6df1\x2d4a85\x2d9aac\x2d9fabd85ccabd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-1968349c\x2d6df1\x2d4a85\x2d9aac\x2d9fabd85ccabd.mount has successfully entered the 'dead' state.
Jan 23 17:29:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9e23497f\x2dfa83\x2d4576\x2db368\x2d85b461a86c11.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9e23497f\x2dfa83\x2d4576\x2db368\x2d85b461a86c11.mount has successfully entered the 'dead' state.
Jan 23 17:29:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1968349c\x2d6df1\x2d4a85\x2d9aac\x2d9fabd85ccabd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-1968349c\x2d6df1\x2d4a85\x2d9aac\x2d9fabd85ccabd.mount has successfully entered the 'dead' state.
Jan 23 17:29:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9e23497f\x2dfa83\x2d4576\x2db368\x2d85b461a86c11.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-9e23497f\x2dfa83\x2d4576\x2db368\x2d85b461a86c11.mount has successfully entered the 'dead' state.
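[Editor's note] The run-utsns/run-ipcns/run-netns mount units above carry systemd-escaped names: the unit-name encoding writes each "-" of the underlying namespace ID as \x2d. A small decoder for exactly this case, so the IDs read as ordinary UUIDs again; unescapeUnit is an illustrative helper, not a systemd API, and systemd's full unescaping handles arbitrary \xNN bytes, not just the hyphen:

package main

import (
	"fmt"
	"strings"
)

// unescapeUnit reverses the one escape that appears in these logs:
// systemd writes "-" inside a unit name component as the literal \x2d.
func unescapeUnit(name string) string {
	return strings.ReplaceAll(name, `\x2d`, "-")
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-9e23497f\x2dfa83\x2d4576\x2db368\x2d85b461a86c11.mount`))
	// Output: run-netns-9e23497f-fa83-4576-b368-85b461a86c11.mount
}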
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.075318071Z" level=info msg="runSandbox: deleting pod ID 752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62 from idIndex" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.075342099Z" level=info msg="runSandbox: removing pod sandbox 752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.075358307Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.075373442Z" level=info msg="runSandbox: unmounting shmPath for sandbox 752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.075361130Z" level=info msg="runSandbox: deleting pod ID 097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412 from idIndex" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.075460855Z" level=info msg="runSandbox: removing pod sandbox 097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.075479792Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.075494027Z" level=info msg="runSandbox: unmounting shmPath for sandbox 097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.088464034Z" level=info msg="runSandbox: removing pod sandbox from storage: 097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.091330313Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.091349958Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=19fe68b9-c21e-4539-afd2-6a5208bdf9dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:11.091593 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:11.091633 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:11.091655 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:11.091697 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.092465810Z" level=info msg="runSandbox: removing pod sandbox from storage: 752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.096024981Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.096045103Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=6d5a11eb-c2a6-4c9e-9d0b-27ab2f4af5ac name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:11.096245 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:11.096279 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:11.096304 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:11.096352 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:29:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-752c13707581be16f85c7437d0680c6404a9677013dd37b355aca17cb670df62-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:29:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-097be7991d9df83935bd487849d7dab3cb3088d2f188c6ab16e7e1c2667d0412-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:11.996249 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:11.996528 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.996587222Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.996627332Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.996934534Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:11.996965884Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:12.012598762Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/462579db-ca4c-4d76-bb2c-80e10476dee1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:12.012620338Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:12.012980101Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/c3b860b5-0d42-417e-8d09-b4b06f5f170d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:29:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:12.013017512Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.037381212Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.037423009Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.038170895Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.038227247Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7fa01ebf\x2d7220\x2d4450\x2d8c8f\x2da12524ceb532.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7fa01ebf\x2d7220\x2d4450\x2d8c8f\x2da12524ceb532.mount has successfully entered the 'dead' state.
Jan 23 17:29:14 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d7281c7a\x2d5a10\x2d42a1\x2db2d4\x2dd918f6b41aca.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d7281c7a\x2d5a10\x2d42a1\x2db2d4\x2dd918f6b41aca.mount has successfully entered the 'dead' state.
Jan 23 17:29:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7fa01ebf\x2d7220\x2d4450\x2d8c8f\x2da12524ceb532.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7fa01ebf\x2d7220\x2d4450\x2d8c8f\x2da12524ceb532.mount has successfully entered the 'dead' state.
Jan 23 17:29:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d7281c7a\x2d5a10\x2d42a1\x2db2d4\x2dd918f6b41aca.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d7281c7a\x2d5a10\x2d42a1\x2db2d4\x2dd918f6b41aca.mount has successfully entered the 'dead' state.
Jan 23 17:29:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7fa01ebf\x2d7220\x2d4450\x2d8c8f\x2da12524ceb532.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7fa01ebf\x2d7220\x2d4450\x2d8c8f\x2da12524ceb532.mount has successfully entered the 'dead' state.
Jan 23 17:29:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d7281c7a\x2d5a10\x2d42a1\x2db2d4\x2dd918f6b41aca.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d7281c7a\x2d5a10\x2d42a1\x2db2d4\x2dd918f6b41aca.mount has successfully entered the 'dead' state.
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.078328290Z" level=info msg="runSandbox: deleting pod ID e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff from idIndex" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.078355397Z" level=info msg="runSandbox: removing pod sandbox e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.078369207Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.078381258Z" level=info msg="runSandbox: unmounting shmPath for sandbox e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.078334553Z" level=info msg="runSandbox: deleting pod ID 2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145 from idIndex" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.078445881Z" level=info msg="runSandbox: removing pod sandbox 2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.078458889Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.078470156Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.094428460Z" level=info msg="runSandbox: removing pod sandbox from storage: 2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.094436639Z" level=info msg="runSandbox: removing pod sandbox from storage: e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.097899315Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.097918927Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=fbd5e333-85f1-419a-9d13-756015b44392 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:14.098163 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:29:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:14.098209 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:29:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:14.098232 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:29:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:14.098280 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.100906298Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:14.100923312Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d4950d2f-3442-4aee-8f7b-24423345b97f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:14.101119 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:29:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:14.101154 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:29:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:14.101174 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:29:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:14.101232 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:29:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2276894547d12873b9a42ac53bc548dfb3ef95e594b9ed2389ecfb5ea0d6f145-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:29:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e8d9a2c617bd7db9914058b204fa6c02fbf78a00047da8095de3594f3443deff-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:29:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:15.995811 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:29:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:15.996114219Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:15.996152184Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.008289439Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/27c07520-c6d8-4287-a1a5-052114297677 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.008308755Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.033870916Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.033899786Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ffbe0762\x2d65b8\x2d4b7a\x2dbbd7\x2de0aee4a38f98.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ffbe0762\x2d65b8\x2d4b7a\x2dbbd7\x2de0aee4a38f98.mount has successfully entered the 'dead' state. Jan 23 17:29:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ffbe0762\x2d65b8\x2d4b7a\x2dbbd7\x2de0aee4a38f98.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ffbe0762\x2d65b8\x2d4b7a\x2dbbd7\x2de0aee4a38f98.mount has successfully entered the 'dead' state. 
Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.076307330Z" level=info msg="runSandbox: deleting pod ID 396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040 from idIndex" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.076330832Z" level=info msg="runSandbox: removing pod sandbox 396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.076342965Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.076355021Z" level=info msg="runSandbox: unmounting shmPath for sandbox 396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.088427548Z" level=info msg="runSandbox: removing pod sandbox from storage: 396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.091152957Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:16.091170705Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=0864d039-0540-4b9e-80af-8cfcf2aeca27 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:16.091389 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:29:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:16.091436 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:29:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:16.091463 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:29:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:16.091514 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:29:17 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ffbe0762\x2d65b8\x2d4b7a\x2dbbd7\x2de0aee4a38f98.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ffbe0762\x2d65b8\x2d4b7a\x2dbbd7\x2de0aee4a38f98.mount has successfully entered the 'dead' state. Jan 23 17:29:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-396d1ee4aa46c782bfe2c1414f290e3c02acd4fea963bd302420d18812357040-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.733741448Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.733777234Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-328e7375\x2dfe74\x2d461c\x2dbf19\x2dc99a9e0759e1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-328e7375\x2dfe74\x2d461c\x2dbf19\x2dc99a9e0759e1.mount has successfully entered the 'dead' state. Jan 23 17:29:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-328e7375\x2dfe74\x2d461c\x2dbf19\x2dc99a9e0759e1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-328e7375\x2dfe74\x2d461c\x2dbf19\x2dc99a9e0759e1.mount has successfully entered the 'dead' state. Jan 23 17:29:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-328e7375\x2dfe74\x2d461c\x2dbf19\x2dc99a9e0759e1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-328e7375\x2dfe74\x2d461c\x2dbf19\x2dc99a9e0759e1.mount has successfully entered the 'dead' state. 
Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.783309725Z" level=info msg="runSandbox: deleting pod ID 59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751 from idIndex" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.783334973Z" level=info msg="runSandbox: removing pod sandbox 59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.783348037Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.783360672Z" level=info msg="runSandbox: unmounting shmPath for sandbox 59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.803489643Z" level=info msg="runSandbox: removing pod sandbox from storage: 59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.806976427Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.806993278Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=f75bbdd4-6c0a-4c3e-a778-aa5579faa119 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:18.807193 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:29:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:18.807396 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:29:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:18.807420 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:29:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:18.807466 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(59ae29cf2b2aba947e04606a68a8bc4ca05b2557bd635db35a691af50c48d751): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:29:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:18.995854 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.996224222Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:18.996254746Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:18.996660 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:29:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:18.997152 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:29:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:19.006966012Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/c9d0c390-0b7b-4e92-b0a6-90e9e11aac3e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:19.006985534Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:19.579298 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:29:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:19.579638683Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:19.579671842Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:19.589608383Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/819b6395-7186-4108-8413-16c81b4f86c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:19.589627213Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.032012468Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.032054860Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bccb2c39\x2d618e\x2d4562\x2d9148\x2d9e14a9b5aafb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bccb2c39\x2d618e\x2d4562\x2d9148\x2d9e14a9b5aafb.mount has successfully entered the 'dead' state. Jan 23 17:29:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bccb2c39\x2d618e\x2d4562\x2d9148\x2d9e14a9b5aafb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-bccb2c39\x2d618e\x2d4562\x2d9148\x2d9e14a9b5aafb.mount has successfully entered the 'dead' state. Jan 23 17:29:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bccb2c39\x2d618e\x2d4562\x2d9148\x2d9e14a9b5aafb.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-bccb2c39\x2d618e\x2d4562\x2d9148\x2d9e14a9b5aafb.mount has successfully entered the 'dead' state. Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.075317990Z" level=info msg="runSandbox: deleting pod ID fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d from idIndex" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.075346428Z" level=info msg="runSandbox: removing pod sandbox fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.075364790Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.075378250Z" level=info msg="runSandbox: unmounting shmPath for sandbox fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.095468241Z" level=info msg="runSandbox: removing pod sandbox from storage: fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.098293880Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:21.098314485Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=c2b8cb97-96a3-4db0-ac80-5bc522e09d4f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:21.098546 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:29:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:21.098591 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:29:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:21.098615 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:29:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:21.098664 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(fd01211abdfb43ec140f06effe0f52d4727dedc2a1ebfbe95dc8c2d0f97d457d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:29:24 hub-master-0.workload.bos2.lab sshd[25182]: Received disconnect from 2600:52:7:18::11 port 38948:11: disconnected by user Jan 23 17:29:24 hub-master-0.workload.bos2.lab sshd[25182]: Disconnected from user core 2600:52:7:18::11 port 38948 Jan 23 17:29:24 hub-master-0.workload.bos2.lab sshd[25128]: pam_unix(sshd:session): session closed for user core Jan 23 17:29:24 hub-master-0.workload.bos2.lab systemd-logind[3052]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:29:24 hub-master-0.workload.bos2.lab systemd[1]: session-3.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-3.scope has successfully entered the 'dead' state. Jan 23 17:29:24 hub-master-0.workload.bos2.lab systemd[1]: session-3.scope: Consumed 374ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-3.scope completed and consumed the indicated resources. Jan 23 17:29:24 hub-master-0.workload.bos2.lab systemd-logind[3052]: Removed session 3. 
-- Subject: Session 3 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 3 has been terminated. Jan 23 17:29:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:24.996498 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:29:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:24.996720 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:29:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:24.996849761Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:24.996967819Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:24.996987588Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:24.996998580Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.018957288Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/c1dec506-9dca-4de0-8fb0-0accee6fe508 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.018987471Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.019082228Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/90bc566f-a529-4639-b694-0df124e7c939 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.019101586Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.033255414Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4): error removing pod 
openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.033293526Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-22e7911e\x2d3cdc\x2d4493\x2d8c84\x2d76f56c953887.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-22e7911e\x2d3cdc\x2d4493\x2d8c84\x2d76f56c953887.mount has successfully entered the 'dead' state. Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.074309987Z" level=info msg="runSandbox: deleting pod ID e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4 from idIndex" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.074333269Z" level=info msg="runSandbox: removing pod sandbox e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.074345420Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.074356227Z" level=info msg="runSandbox: unmounting shmPath for sandbox e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.086426434Z" level=info msg="runSandbox: removing pod sandbox from storage: e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.089330333Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.089347962Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a5c94577-44cc-4f93-a288-2c1afa61cbad name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:25.089818 8631 
remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:29:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:25.089970 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:29:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:25.089996 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:29:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:25.090048 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:29:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:25.995500 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:29:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:25.995611 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.995923048Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.995966773Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.995999374Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:25.995975426Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:26.011631047Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b899cef3-cdb8-4511-bca9-24bd53a2e286 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:26.011650926Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:26.012147154Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/d0be56bb-f212-4030-bf52-d35f5b5b9545 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:26.012170280Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-22e7911e\x2d3cdc\x2d4493\x2d8c84\x2d76f56c953887.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-22e7911e\x2d3cdc\x2d4493\x2d8c84\x2d76f56c953887.mount has successfully entered the 'dead' state. Jan 23 17:29:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-22e7911e\x2d3cdc\x2d4493\x2d8c84\x2d76f56c953887.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-22e7911e\x2d3cdc\x2d4493\x2d8c84\x2d76f56c953887.mount has successfully entered the 'dead' state. Jan 23 17:29:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e4864c9e08c46142ade6e017842e6f248c8ffd181fbda203c6ab4e0ae16a68b4-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:29:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:26.996168 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:29:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:26.996498930Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:26.996531992Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:27.007550027Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/ed8c7e96-39ce-418d-9a8b-61d4c613d4d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:27.007576566Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:27.901011 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:27.901032 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:27.901039 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:27.901046 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:27.901051 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:27.901058 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:29:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:27.901065 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:29:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:28.142633596Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 
17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.469038171Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.469259914Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.470923937Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.470955154Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-11e5edf1\x2da0eb\x2d469e\x2da3c4\x2dd611a1261111.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-11e5edf1\x2da0eb\x2d469e\x2da3c4\x2dd611a1261111.mount has successfully entered the 'dead' state. 
Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.474564985Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.474592317Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.475900130Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.475930808Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f8b9b0bb\x2dec54\x2d4f95\x2db06b\x2d6ce5a1adbb29.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f8b9b0bb\x2dec54\x2d4f95\x2db06b\x2d6ce5a1adbb29.mount has successfully entered the 'dead' state. 
Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.479643594Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.479672496Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8162270e\x2d2fae\x2d4206\x2da614\x2d60daee077ffb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8162270e\x2d2fae\x2d4206\x2da614\x2d60daee077ffb.mount has successfully entered the 'dead' state. Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f2cd8cbb\x2d7a4b\x2d453d\x2dbe8f\x2d10fe6c897b07.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f2cd8cbb\x2d7a4b\x2d453d\x2dbe8f\x2d10fe6c897b07.mount has successfully entered the 'dead' state. Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-420669ec\x2d0b53\x2d4b55\x2db131\x2d475269fb0ec7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-420669ec\x2d0b53\x2d4b55\x2db131\x2d475269fb0ec7.mount has successfully entered the 'dead' state. Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8162270e\x2d2fae\x2d4206\x2da614\x2d60daee077ffb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8162270e\x2d2fae\x2d4206\x2da614\x2d60daee077ffb.mount has successfully entered the 'dead' state. Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f2cd8cbb\x2d7a4b\x2d453d\x2dbe8f\x2d10fe6c897b07.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f2cd8cbb\x2d7a4b\x2d453d\x2dbe8f\x2d10fe6c897b07.mount has successfully entered the 'dead' state. Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f8b9b0bb\x2dec54\x2d4f95\x2db06b\x2d6ce5a1adbb29.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f8b9b0bb\x2dec54\x2d4f95\x2db06b\x2d6ce5a1adbb29.mount has successfully entered the 'dead' state. Jan 23 17:29:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-11e5edf1\x2da0eb\x2d469e\x2da3c4\x2dd611a1261111.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-11e5edf1\x2da0eb\x2d469e\x2da3c4\x2dd611a1261111.mount has successfully entered the 'dead' state. Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.512334425Z" level=info msg="runSandbox: deleting pod ID 3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef from idIndex" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.512364758Z" level=info msg="runSandbox: removing pod sandbox 3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.512382823Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.512398242Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.521276677Z" level=info msg="runSandbox: deleting pod ID d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00 from idIndex" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.521300930Z" level=info msg="runSandbox: removing pod sandbox d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.521313144Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.521326711Z" level=info msg="runSandbox: unmounting shmPath for sandbox d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.524318081Z" level=info msg="runSandbox: deleting pod ID f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559 from idIndex" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.524342850Z" level=info msg="runSandbox: removing pod sandbox f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.524355849Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559" 
id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.524367466Z" level=info msg="runSandbox: unmounting shmPath for sandbox f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.524320647Z" level=info msg="runSandbox: deleting pod ID 96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912 from idIndex" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.524429657Z" level=info msg="runSandbox: removing pod sandbox 96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.524442341Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.524453886Z" level=info msg="runSandbox: unmounting shmPath for sandbox 96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.525274202Z" level=info msg="runSandbox: deleting pod ID a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5 from idIndex" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.525297688Z" level=info msg="runSandbox: removing pod sandbox a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.525311467Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.525323135Z" level=info msg="runSandbox: unmounting shmPath for sandbox a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.529447639Z" level=info msg="runSandbox: removing pod sandbox from storage: 3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.532474422Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.532493667Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b7ab9c52-aa1e-49cd-8732-4a7bde63e67e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.532757 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.532810 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.532837 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.532888 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.533410158Z" level=info msg="runSandbox: removing pod sandbox from storage: d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.536650651Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.536669524Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=85e49bc9-a275-4233-aa3c-fd7303648c55 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.536864 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.536898 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.536918 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.536958 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.542501743Z" level=info msg="runSandbox: removing pod sandbox from storage: f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.542502897Z" level=info msg="runSandbox: removing pod sandbox from storage: a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.542511349Z" level=info msg="runSandbox: removing pod sandbox from storage: 96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.545705003Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.545724123Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=1f773164-992a-44c7-8551-2d683be9742b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.545934 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.545971 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.545996 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.546045 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.548587457Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.548603987Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=5365232b-a25c-47c0-81bf-860f822730c5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.548813 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.548856 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.548878 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.548913 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.554765599Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.554788287Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=16c50e6b-d950-4eb4-9d23-acafba77bc49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.554986 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.555016 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.555037 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:29.555075 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:29.599397 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:29.599509 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:29.599641 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:29.599674 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.599738749Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.599770141Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:29.599771 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.599854722Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.599885019Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.599972863Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.599987340Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.599984711Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.600125820Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.600089590Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.600233509Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.623936509Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c UID:1886664c-cb49-48f7-b263-eff19ad90869 
NetNS:/var/run/netns/5fe15de7-fc1a-47b6-bbc1-c8186dab74e1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.623958972Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.624885241Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c6152bc9-0901-4c1a-9731-dea4b6497b14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.624906793Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.626968732Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/c6624b93-a64a-466e-b65f-e7fc5b6e28dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.626990879Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.629699980Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/d2374aeb-0b4b-4036-9035-e814fd1747f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.629725341Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.631529375Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/89f387b9-2e07-45d0-ada2-fef759e5045e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:29.631549232Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-420669ec\x2d0b53\x2d4b55\x2db131\x2d475269fb0ec7.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-420669ec\x2d0b53\x2d4b55\x2db131\x2d475269fb0ec7.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-420669ec\x2d0b53\x2d4b55\x2db131\x2d475269fb0ec7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-420669ec\x2d0b53\x2d4b55\x2db131\x2d475269fb0ec7.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8162270e\x2d2fae\x2d4206\x2da614\x2d60daee077ffb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8162270e\x2d2fae\x2d4206\x2da614\x2d60daee077ffb.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f2cd8cbb\x2d7a4b\x2d453d\x2dbe8f\x2d10fe6c897b07.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f2cd8cbb\x2d7a4b\x2d453d\x2dbe8f\x2d10fe6c897b07.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a076c7cd0b438170e9a915901332e3b45bed2f318c3873745e51a8f1949365f5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f8b9b0bb\x2dec54\x2d4f95\x2db06b\x2d6ce5a1adbb29.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f8b9b0bb\x2dec54\x2d4f95\x2db06b\x2d6ce5a1adbb29.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-96eafe49c1cf6264b6f1b46b9bac54823bc658492b9ca37e49b4bff7a573d912-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-11e5edf1\x2da0eb\x2d469e\x2da3c4\x2dd611a1261111.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-11e5edf1\x2da0eb\x2d469e\x2da3c4\x2dd611a1261111.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f6ef302b75bc5bb1345f78f1c753af70fce192dfc7a312658335a4c2d1425559-userdata-shm.mount has successfully entered the 'dead' state. 
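Note the retry cycle that generates most of the volume here: each failed CNI ADD makes CRI-O destroy the sandbox (the runSandbox cleanup and shm/netns unmounts above), kubelet then reports "No sandbox for pod can be found. Need to start a new one", and the same pod returns under a fresh sandbox ID; oauth-openshift-868d5f6bf8-svlxj, for example, moves from 3e4c1e92... to 880554b0.... A hypothetical triage helper (not tooling from this cluster) that counts attempts per pod by scanning a saved journal for CRI-O's "Running pod sandbox" lines:

// Hypothetical triage helper: count sandbox attempts per pod from a saved
// journal read on stdin. The regexp matches the CRI-O message format seen
// in this log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`msg="Running pod sandbox: ([^/"]+/[^/"]+)/POD"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++ // key is namespace/pod
		}
	}
	for pod, n := range counts {
		fmt.Printf("%4d sandbox attempts  %s\n", n, pod)
	}
}

Fed this journal, it would show the guard and control-plane pods accumulating attempts for as long as the readiness file is absent.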
Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d7498569ee389b64300d5441d84d6b8b1aa3a712f0a14d7d1b012d2a57bb8b00-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:29:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3e4c1e9269be5b9a77547fbcc7ace02bd8e99c14c0506699449748d352add3ef-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:29:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:32.996843 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:29:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:32.997565 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:29:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:34.995772 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:29:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:34.996121903Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:29:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:34.996173780Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:29:34 hub-master-0.workload.bos2.lab systemd[1]: Stopping User Manager for UID 1000... -- Subject: Unit user@1000.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@1000.service has begun shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Removed slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopping D-Bus User Message Bus... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopping podman-pause-f8e5da46.scope. -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped target Default. 
-- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopping Podman Start All Containers With Restart Policy Set To Always... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:35.008171442Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/647c5129-d08e-49c7-b42f-953b3f72dc22 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:29:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:35.008200621Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped D-Bus User Message Bus. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped podman-pause-f8e5da46.scope. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Removed slice user.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab sh[148218]: time="2023-01-23T17:29:35Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 17:29:35 hub-master-0.workload.bos2.lab sh[148218]: Error: you must provide at least one name or id Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: podman-restart.service: Control process exited, code=exited status=125 Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: podman-restart.service: Failed with result 'exit-code'. -- Subject: Unit failed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit UNIT has entered the 'failed' state with result 'exit-code'. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped Podman Start All Containers With Restart Policy Set To Always. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped target Basic System. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped Create User's Volatile Files and Directories. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped target Timers. 
-- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped Daily Cleanup of User's Temporary Directories. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped Podman auto-update timer. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped target Paths. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Stopped target Sockets. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Closed GnuPG network certificate management daemon. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Closed GnuPG cryptographic agent (ssh-agent emulation). -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Closed GnuPG cryptographic agent and passphrase cache. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Closed Podman API Socket. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers). -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Closed D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Closed GnuPG cryptographic agent and passphrase cache (restricted). -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Reached target Shutdown. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Started Exit the Session. 
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25135]: Reached target Exit the Session.
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[25137]: pam_unix(systemd-user:session): session closed for user core
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: user@1000.service: Succeeded.
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: Stopped User Manager for UID 1000.
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: user@1000.service: Consumed 18.002s CPU time
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: Stopping User runtime directory /run/user/1000...
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: run-user-1000.mount: Succeeded.
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: run-user-1000.mount: Consumed 0 CPU time
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: user-runtime-dir@1000.service: Succeeded.
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: Stopped User runtime directory /run/user/1000.
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: user-runtime-dir@1000.service: Consumed 3ms CPU time
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: Removed slice User Slice of UID 1000.
Jan 23 17:29:35 hub-master-0.workload.bos2.lab systemd[1]: user-1000.slice: Consumed 18.385s CPU time
Jan 23 17:29:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:38.995450 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:29:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:38.995949733Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:38.996103303Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:29:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:39.008465113Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/2a3c27fe-9bdb-44c7-8d89-f2129cb2f7d2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:29:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:39.008618918Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:29:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:29:46.996578 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:29:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:29:46.997097 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:29:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:53.583552775Z" level=info msg="NetworkStart: stopping network for sandbox 5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:53.583701820Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/07e258e4-7bc9-499e-b3f4-15a60dd2b445 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:29:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:53.583724171Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:29:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:53.583731292Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:29:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:53.583737508Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.027831380Z" level=info msg="NetworkStart: stopping network for sandbox 06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.027896646Z" level=info msg="NetworkStart: stopping network for sandbox f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.027985084Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/462579db-ca4c-4d76-bb2c-80e10476dee1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.028010166Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.028017040Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.028025466Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.028018277Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/c3b860b5-0d42-417e-8d09-b4b06f5f170d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.028110093Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.028117149Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:29:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:57.028123260Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:29:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:29:58.143734737Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:01.021181562Z" level=info msg="NetworkStart: stopping network for sandbox 03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:01.021338572Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/27c07520-c6d8-4287-a1a5-052114297677 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:01.021361981Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:01.021368793Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:01.021374997Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:01.996648 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:30:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:01.997399 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.019497070Z" level=info msg="NetworkStart: stopping network for sandbox 6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.019859816Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/c9d0c390-0b7b-4e92-b0a6-90e9e11aac3e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.019882811Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.019890201Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.019896285Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.603251425Z" level=info msg="NetworkStart: stopping network for sandbox bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.603382882Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/819b6395-7186-4108-8413-16c81b4f86c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.603404505Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.603410883Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:04.603417292Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495008.1240] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 17:30:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495008.1246] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 17:30:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495008.1246] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 17:30:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495008.1248] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 17:30:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495008.1261] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 17:30:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495008.1267] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 17:30:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495009.7483] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032347693Z" level=info msg="NetworkStart: stopping network for sandbox 5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032535307Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/90bc566f-a529-4639-b694-0df124e7c939 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032543567Z" level=info msg="NetworkStart: stopping network for sandbox dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032558316Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032670025Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/c1dec506-9dca-4de0-8fb0-0accee6fe508 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032695656Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032703976Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032710749Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032673426Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:10.032766887Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.023985167Z" level=info msg="NetworkStart: stopping network for sandbox 0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024014687Z" level=info msg="NetworkStart: stopping network for sandbox c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024131631Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b899cef3-cdb8-4511-bca9-24bd53a2e286 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024131606Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/d0be56bb-f212-4030-bf52-d35f5b5b9545 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024178622Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024184951Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024190955Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024157424Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024437448Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:11.024445319Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:12.020850310Z" level=info msg="NetworkStart: stopping network for sandbox 06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:12.021007975Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/ed8c7e96-39ce-418d-9a8b-61d4c613d4d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:12.021038347Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:12.021045475Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:12.021052352Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.637764260Z" level=info msg="NetworkStart: stopping network for sandbox 916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.637910186Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c6152bc9-0901-4c1a-9731-dea4b6497b14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.637932270Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.637938928Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.637944916Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638126738Z" level=info msg="NetworkStart: stopping network for sandbox 69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638278526Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/5fe15de7-fc1a-47b6-bbc1-c8186dab74e1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638302841Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638309793Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638317429Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638850487Z" level=info msg="NetworkStart: stopping network for sandbox 322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638964921Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/c6624b93-a64a-466e-b65f-e7fc5b6e28dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638984148Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638992443Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.638998125Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.642639383Z" level=info msg="NetworkStart: stopping network for sandbox 7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.642739311Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/d2374aeb-0b4b-4036-9035-e814fd1747f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.642759592Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.642766307Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.642773606Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.643684569Z" level=info msg="NetworkStart: stopping network for sandbox 880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.643859547Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/89f387b9-2e07-45d0-ada2-fef759e5045e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.643896444Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.643909520Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:14.643921440Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:14.996310 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:30:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:14.996815 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.609046 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hzzwf]
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.609082 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.618919 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hzzwf]
Jan 23 17:30:18 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-besteffort-podf8b6f41a_7844_454d_bc89_62a41e96effc.slice.
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.773673 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b6f41a-7844-454d-bc89-62a41e96effc-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.773702 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b6f41a-7844-454d-bc89-62a41e96effc-ready\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.773728 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdg6m\" (UniqueName: \"kubernetes.io/projected/f8b6f41a-7844-454d-bc89-62a41e96effc-kube-api-access-zdg6m\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.773753 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b6f41a-7844-454d-bc89-62a41e96effc-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.874713 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-zdg6m\" (UniqueName: \"kubernetes.io/projected/f8b6f41a-7844-454d-bc89-62a41e96effc-kube-api-access-zdg6m\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.874742 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b6f41a-7844-454d-bc89-62a41e96effc-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.874768 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b6f41a-7844-454d-bc89-62a41e96effc-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.874785 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b6f41a-7844-454d-bc89-62a41e96effc-ready\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.874846 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b6f41a-7844-454d-bc89-62a41e96effc-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.874953 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b6f41a-7844-454d-bc89-62a41e96effc-ready\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.875163 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b6f41a-7844-454d-bc89-62a41e96effc-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.890070 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdg6m\" (UniqueName: \"kubernetes.io/projected/f8b6f41a-7844-454d-bc89-62a41e96effc-kube-api-access-zdg6m\") pod \"cni-sysctl-allowlist-ds-hzzwf\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:18.924235 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:18.924573991Z" level=info msg="Running pod sandbox: openshift-multus/cni-sysctl-allowlist-ds-hzzwf/POD" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:18.924614625Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:18.937678744Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-hzzwf Namespace:openshift-multus ID:5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5 UID:f8b6f41a-7844-454d-bc89-62a41e96effc NetNS:/var/run/netns/b1b116b0-db4f-4206-9ea4-d8c0942f747e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:18.937699268Z" level=info msg="Adding pod openshift-multus_cni-sysctl-allowlist-ds-hzzwf to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:20.021756526Z" level=info msg="NetworkStart: stopping network for sandbox a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:20.021889466Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/647c5129-d08e-49c7-b42f-953b3f72dc22 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:20.021910509Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:20.021917028Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:20.021923397Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:24.021613447Z" level=info msg="NetworkStart: stopping network for sandbox c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:24.021964592Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/2a3c27fe-9bdb-44c7-8d89-f2129cb2f7d2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:30:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:24.021989905Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:30:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:24.021997166Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:30:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:24.022003766Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:30:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:26.996620 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:30:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:26.997132 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:27.901508 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:27.901525 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:27.901533 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:27.901539 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:27.901545 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:27.901551 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:30:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:27.901560 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:30:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:27.905022495Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=628335ac-8169-4ebf-b7e9-f696368189a0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:30:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:27.905159831Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=628335ac-8169-4ebf-b7e9-f696368189a0 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:30:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:28.143292266Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:30:28 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00104|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 adds)
Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.595709034Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.595751838Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-07e258e4\x2d7bc9\x2d499e\x2db3f4\x2d15a60dd2b445.mount: Succeeded.
Jan 23 17:30:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-07e258e4\x2d7bc9\x2d499e\x2db3f4\x2d15a60dd2b445.mount: Succeeded.
Jan 23 17:30:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-07e258e4\x2d7bc9\x2d499e\x2db3f4\x2d15a60dd2b445.mount: Succeeded.
Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.632311863Z" level=info msg="runSandbox: deleting pod ID 5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93 from idIndex" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.632334582Z" level=info msg="runSandbox: removing pod sandbox 5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.632347785Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.632359038Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.648517963Z" level=info msg="runSandbox: removing pod sandbox from storage: 5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.651937884Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.651957735Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=2572e1fe-a2ca-45eb-b31a-a588bafc8786 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:38.652082 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:38.652283 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:30:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:38.652308 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:30:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:38.652358 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(5384c3536100901771783ae7a0b6d7b2c80685a30129a534ca0890fecad8ca93): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298 Jan 23 17:30:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:38.731559 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.731929665Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.731960979Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.742953917Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/12f82047-9e20-4d02-8593-c8db7dd07b74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:38.742973096Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:30:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:39.996368 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:30:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:39.996875 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.038291604Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.038334993Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5" 
id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.038718020Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.038746686Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c3b860b5\x2d0d42\x2d417e\x2d8d09\x2db4b06f5f170d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c3b860b5\x2d0d42\x2d417e\x2d8d09\x2db4b06f5f170d.mount has successfully entered the 'dead' state. Jan 23 17:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-462579db\x2dca4c\x2d4d76\x2dbb2c\x2d80e10476dee1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-462579db\x2dca4c\x2d4d76\x2dbb2c\x2d80e10476dee1.mount has successfully entered the 'dead' state. Jan 23 17:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c3b860b5\x2d0d42\x2d417e\x2d8d09\x2db4b06f5f170d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c3b860b5\x2d0d42\x2d417e\x2d8d09\x2db4b06f5f170d.mount has successfully entered the 'dead' state. Jan 23 17:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-462579db\x2dca4c\x2d4d76\x2dbb2c\x2d80e10476dee1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-462579db\x2dca4c\x2d4d76\x2dbb2c\x2d80e10476dee1.mount has successfully entered the 'dead' state. 
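
The CrashLoopBackOff entries for ovnkube-node-897lw above are the root cause of every sandbox failure in this window: while ovnkube-node keeps crashing, OVN-Kubernetes never writes its CNI config, so Multus never sees the readiness indicator file at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. The "back-off 5m0s" figure is kubelet's restart back-off cap; below is a minimal sketch of that schedule, assuming kubelet's usual 10s initial delay and doubling per crash (the 10s base is our assumption, only the 5m0s cap appears in the log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumption: kubelet's restart back-off starts at 10s and doubles
    	// per crash; only the 5m0s cap is confirmed by the log above.
    	delay := 10 * time.Second
    	const maxDelay = 5 * time.Minute
    	for restart := 1; restart <= 8; restart++ {
    		fmt.Printf("restart %d: back-off %v\n", restart, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay // prints 10s, 20s, 40s, ... then pins at 5m0s
    		}
    	}
    }

Until a restart attempt succeeds and stays up long enough, kubelet only logs "Error syncing pod, skipping" at each sync, which is why the same message recurs every few seconds below.
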
Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.077307981Z" level=info msg="runSandbox: deleting pod ID f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453 from idIndex" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.077332387Z" level=info msg="runSandbox: removing pod sandbox f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.077345794Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.077357716Z" level=info msg="runSandbox: unmounting shmPath for sandbox f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.081306061Z" level=info msg="runSandbox: deleting pod ID 06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5 from idIndex" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.081331062Z" level=info msg="runSandbox: removing pod sandbox 06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.081343219Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.081355145Z" level=info msg="runSandbox: unmounting shmPath for sandbox 06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.089468519Z" level=info msg="runSandbox: removing pod sandbox from storage: f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.092447983Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.092466461Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=05e57746-fc28-405f-8424-1f9253ee6c09 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:42.092599 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:42.092648 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:42.092673 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:42.092721 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.098461851Z" level=info msg="runSandbox: removing pod sandbox from storage: 06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.101728084Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:42.101747964Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=53c47c27-bde9-4a2b-8faf-b0fcc2880283 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:42.101949 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:42.101981 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:42.102004 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:30:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:42.102045 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c3b860b5\x2d0d42\x2d417e\x2d8d09\x2db4b06f5f170d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c3b860b5\x2d0d42\x2d417e\x2d8d09\x2db4b06f5f170d.mount has successfully entered the 'dead' state. Jan 23 17:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-462579db\x2dca4c\x2d4d76\x2dbb2c\x2d80e10476dee1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-462579db\x2dca4c\x2d4d76\x2dbb2c\x2d80e10476dee1.mount has successfully entered the 'dead' state. Jan 23 17:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f4619e6c37affb1d30f6090a3e09f76a3a2b94f5b0be916ac7ca6ffcdf9de453-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:30:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-06654b8cfbaa1176c1c0170bb87a8e7b0701a8d9c1366213a079afd1eb2ab3c5-userdata-shm.mount has successfully entered the 'dead' state. 
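
Note how each sandbox failure above is logged four times: remote_runtime.go surfaces the raw CRI error, then kuberuntime_sandbox.go, kuberuntime_manager.go, and pod_workers.go each log their wrapped copy, and every layer that re-quotes the message adds one level of backslash escaping (\" in the first entry becomes \\\" in the pod_workers entry). A toy Go illustration of that re-quoting, not kubelet's actual code:

    package main

    import "fmt"

    func main() {
    	// Toy illustration: every layer that re-quotes an error string
    	// with %q adds one level of escaping.
    	inner := `plugin type="multus" failed (add)`
    	wrapped := fmt.Sprintf("%q", inner)     // one layer of quoting
    	rewrapped := fmt.Sprintf("%q", wrapped) // a second layer
    	fmt.Println(inner)     // plugin type="multus" failed (add)
    	fmt.Println(wrapped)   // "plugin type=\"multus\" failed (add)"
    	fmt.Println(rewrapped) // "\"plugin type=\\\"multus\\\" failed (add)\""
    }

So the four entries per failure are one error, not four; the escaping depth tells you which layer of the kubelet stack emitted each copy.
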
Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.032314351Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.032348905Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-27c07520\x2dc6d8\x2d4287\x2da1a5\x2d052114297677.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-27c07520\x2dc6d8\x2d4287\x2da1a5\x2d052114297677.mount has successfully entered the 'dead' state. Jan 23 17:30:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-27c07520\x2dc6d8\x2d4287\x2da1a5\x2d052114297677.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-27c07520\x2dc6d8\x2d4287\x2da1a5\x2d052114297677.mount has successfully entered the 'dead' state. Jan 23 17:30:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-27c07520\x2dc6d8\x2d4287\x2da1a5\x2d052114297677.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-27c07520\x2dc6d8\x2d4287\x2da1a5\x2d052114297677.mount has successfully entered the 'dead' state. 
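
The recurring "PollImmediate error waiting for ReadinessIndicatorFile" and "pollimmediate error: timed out waiting for the condition" strings come from the k8s.io/apimachinery wait helpers: Multus polls for the default network's CNI config file and gives up when its timeout expires, failing the CNI ADD or DEL. A minimal sketch of such a loop, assuming wait.PollImmediate; the file path is taken from the log, while the helper name and timeout are ours:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator (our name, not Multus's) polls until the
    // default-network CNI config exists. On timeout, wait.PollImmediate
    // returns the "timed out waiting for the condition" error quoted all
    // through this journal.
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
    	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
    		if _, err := os.Stat(path); err != nil {
    			if os.IsNotExist(err) {
    				return false, nil // file not there yet; poll again
    			}
    			return false, err // a real stat failure aborts the poll
    		}
    		return true, nil
    	})
    }

    func main() {
    	err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 30*time.Second)
    	if err != nil {
    		fmt.Println("pollimmediate error:", err)
    	}
    }
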
Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.064303797Z" level=info msg="runSandbox: deleting pod ID 03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17 from idIndex" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.064329498Z" level=info msg="runSandbox: removing pod sandbox 03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.064343198Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.064354842Z" level=info msg="runSandbox: unmounting shmPath for sandbox 03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.076441694Z" level=info msg="runSandbox: removing pod sandbox from storage: 03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.080047618Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:46.080066403Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=75dcb096-f25c-4dce-ba19-7fcdcb2e31c6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:46.080293 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:30:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:46.080334 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:30:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:46.080358 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:30:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:46.080400 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(03f8ff00725d55994a6470265b4889ed00f3b58f9696fa37261c33cd12798a17): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.030947178Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.030984171Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c9d0c390\x2d0b7b\x2d4e92\x2db0a6\x2d90e9e11aac3e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c9d0c390\x2d0b7b\x2d4e92\x2db0a6\x2d90e9e11aac3e.mount has successfully entered the 'dead' state. Jan 23 17:30:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c9d0c390\x2d0b7b\x2d4e92\x2db0a6\x2d90e9e11aac3e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c9d0c390\x2d0b7b\x2d4e92\x2db0a6\x2d90e9e11aac3e.mount has successfully entered the 'dead' state. Jan 23 17:30:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c9d0c390\x2d0b7b\x2d4e92\x2db0a6\x2d90e9e11aac3e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c9d0c390\x2d0b7b\x2d4e92\x2db0a6\x2d90e9e11aac3e.mount has successfully entered the 'dead' state. 
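
The run-utsns/run-ipcns/run-netns .mount units that keep "entering the 'dead' state" above are the transient namespace mounts CRI-O unmounts while cleaning up each failed sandbox. systemd escapes "-" in unit names as \x2d, so run-netns-c9d0c390\x2d0b7b\x2d4e92\x2db0a6\x2d90e9e11aac3e.mount refers to network namespace c9d0c390-0b7b-4e92-b0a6-90e9e11aac3e. A small decoder for that escaping (the helper is ours, not a systemd API, and it only handles the \x2d case seen here):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // unescapeMountUnit undoes the one piece of systemd unit-name
    // escaping seen in this log: "-" is written as \x2d.
    func unescapeMountUnit(unit string) string {
    	unit = strings.TrimSuffix(unit, ".mount")
    	if i := strings.LastIndex(unit, "ns-"); i >= 0 { // drop run-netns-/run-ipcns-/run-utsns-
    		unit = unit[i+3:]
    	}
    	return strings.ReplaceAll(unit, `\x2d`, "-")
    }

    func main() {
    	fmt.Println(unescapeMountUnit(`run-netns-c9d0c390\x2d0b7b\x2d4e92\x2db0a6\x2d90e9e11aac3e.mount`))
    	// prints the namespace ID: c9d0c390-0b7b-4e92-b0a6-90e9e11aac3e
    }

These IDs match the NetNS paths in crio's "Got pod network" entries, which is how a given mount cleanup can be tied back to the sandbox that failed.
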
Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.084308409Z" level=info msg="runSandbox: deleting pod ID 6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8 from idIndex" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.084334418Z" level=info msg="runSandbox: removing pod sandbox 6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.084347798Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.084361219Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.105468266Z" level=info msg="runSandbox: removing pod sandbox from storage: 6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.109052293Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.109070105Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=705dc637-1ffd-4409-9309-fa271a0bb7ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:49.109313 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:49.109360 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:49.109383 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:49.109428 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6aecf673a7ab4fc3dcc35566e6b2353f1da4a573384dd1bbd6d01d835f88f1f8): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.614102740Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.614133386Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-819b6395\x2d7186\x2d4108\x2d8413\x2d16c81b4f86c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-819b6395\x2d7186\x2d4108\x2d8413\x2d16c81b4f86c8.mount has successfully entered the 'dead' state. Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.663312365Z" level=info msg="runSandbox: deleting pod ID bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa from idIndex" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.663337016Z" level=info msg="runSandbox: removing pod sandbox bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.663350091Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.663362968Z" level=info msg="runSandbox: unmounting shmPath for sandbox bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.679411722Z" level=info msg="runSandbox: removing pod sandbox from storage: bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.682892283Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" 
id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.682911710Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=51e3daec-11b8-4dc4-9735-e37552d7abba name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:49.683113 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:49.683151 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:49.683173 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:49.683229 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:30:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:49.750364 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.750687484Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.750719476Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.761285583Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/c1df152e-7e0e-4f41-998c-7627e83dd872 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:49.761307616Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:30:50 hub-master-0.workload.bos2.lab systemd[1]: run-netns-819b6395\x2d7186\x2d4108\x2d8413\x2d16c81b4f86c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-819b6395\x2d7186\x2d4108\x2d8413\x2d16c81b4f86c8.mount has successfully entered the 'dead' state. Jan 23 17:30:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-819b6395\x2d7186\x2d4108\x2d8413\x2d16c81b4f86c8.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-819b6395\x2d7186\x2d4108\x2d8413\x2d16c81b4f86c8.mount has successfully entered the 'dead' state. Jan 23 17:30:50 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bbbd0caad85eb7aaecff1f8fd155b44d7ff1f93f1dbd6d4bdf2dd9c2bd80c9fa-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:30:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:51.996201 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:30:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:51.996700 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:30:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:52.996412 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:52.996727440Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:52.996960197Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:53.011532941Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/5a74d849-f10d-4950-a4a2-18da712cd4e2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:53.011559379Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:30:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:54.996704 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:30:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:54.997026592Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:54.997070287Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.008063741Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/c2601102-4a4d-4e06-9d58-25e80945d5f4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.008082556Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.043554627Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.043598157Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.044027783Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.044058827Z" 
level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-90bc566f\x2da529\x2d4639\x2db694\x2d0df124e7c939.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-90bc566f\x2da529\x2d4639\x2db694\x2d0df124e7c939.mount has successfully entered the 'dead' state. Jan 23 17:30:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c1dec506\x2d9dca\x2d4de0\x2d8fb0\x2d0accee6fe508.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c1dec506\x2d9dca\x2d4de0\x2d8fb0\x2d0accee6fe508.mount has successfully entered the 'dead' state. Jan 23 17:30:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-90bc566f\x2da529\x2d4639\x2db694\x2d0df124e7c939.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-90bc566f\x2da529\x2d4639\x2db694\x2d0df124e7c939.mount has successfully entered the 'dead' state. Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.082317534Z" level=info msg="runSandbox: deleting pod ID 5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e from idIndex" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.082344586Z" level=info msg="runSandbox: removing pod sandbox 5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.082362180Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.082376655Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.086349989Z" level=info msg="runSandbox: deleting pod ID dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76 from idIndex" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.086379025Z" level=info msg="runSandbox: removing pod sandbox dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.086399031Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:30:55.086410373Z" level=info msg="runSandbox: unmounting shmPath for sandbox dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.098457478Z" level=info msg="runSandbox: removing pod sandbox from storage: 5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.101313195Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.101332129Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=d519babf-3700-4342-8e14-32318bc47711 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:55.101540 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:55.101578 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:55.101600 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:55.101644 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.102444393Z" level=info msg="runSandbox: removing pod sandbox from storage: dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.105539932Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:55.105558748Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=8bd68cab-842c-4f67-ba7e-c66bd675e8c8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:55.105661 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:55.105694 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:55.105716 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:30:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:55.105756 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-90bc566f\x2da529\x2d4639\x2db694\x2d0df124e7c939.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-90bc566f\x2da529\x2d4639\x2db694\x2d0df124e7c939.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c1dec506\x2d9dca\x2d4de0\x2d8fb0\x2d0accee6fe508.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c1dec506\x2d9dca\x2d4de0\x2d8fb0\x2d0accee6fe508.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c1dec506\x2d9dca\x2d4de0\x2d8fb0\x2d0accee6fe508.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c1dec506\x2d9dca\x2d4de0\x2d8fb0\x2d0accee6fe508.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5eb13acdcff34b335b3d2feb302fb8bc23aea3008fa5b6cb038664ca6d61559e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dd68d2e87f2888d1d74deffed2085cae2887dd5986787a10b4220caec483de76-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.034178297Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.034231044Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.034652060Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.034682994Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149" id=b3634165-e118-4a42-9d9d-2064b8af2972 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d0be56bb\x2df212\x2d4030\x2dbf52\x2dd35f5b5b9545.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d0be56bb\x2df212\x2d4030\x2dbf52\x2dd35f5b5b9545.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b899cef3\x2dcdb8\x2d4511\x2dbca9\x2d24bd53a2e286.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b899cef3\x2dcdb8\x2d4511\x2dbca9\x2d24bd53a2e286.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b899cef3\x2dcdb8\x2d4511\x2dbca9\x2d24bd53a2e286.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b899cef3\x2dcdb8\x2d4511\x2dbca9\x2d24bd53a2e286.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d0be56bb\x2df212\x2d4030\x2dbf52\x2dd35f5b5b9545.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d0be56bb\x2df212\x2d4030\x2dbf52\x2dd35f5b5b9545.mount has successfully entered the 'dead' state. Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.073294833Z" level=info msg="runSandbox: deleting pod ID c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284 from idIndex" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.073323730Z" level=info msg="runSandbox: removing pod sandbox c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.073339481Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.073352426Z" level=info msg="runSandbox: unmounting shmPath for sandbox c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.073299081Z" level=info msg="runSandbox: deleting pod ID 0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149 from idIndex" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.073411122Z" level=info msg="runSandbox: removing pod sandbox 0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.073424712Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149" 
id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.073436032Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.085448257Z" level=info msg="runSandbox: removing pod sandbox from storage: c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.085448763Z" level=info msg="runSandbox: removing pod sandbox from storage: 0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.088930104Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.088947303Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=cc7a941b-ef9b-4d82-8b84-0f29ed2b4306 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:56.089210 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:56.089255 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:56.089277 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:56.089325 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.091940878Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:56.091959564Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=b3634165-e118-4a42-9d9d-2064b8af2972 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:56.092133 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:56.092163 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:56.092185 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:30:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:56.092226 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:30:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d0be56bb\x2df212\x2d4030\x2dbf52\x2dd35f5b5b9545.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d0be56bb\x2df212\x2d4030\x2dbf52\x2dd35f5b5b9545.mount has successfully entered the 'dead' state. Jan 23 17:30:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b899cef3\x2dcdb8\x2d4511\x2dbca9\x2d24bd53a2e286.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b899cef3\x2dcdb8\x2d4511\x2dbca9\x2d24bd53a2e286.mount has successfully entered the 'dead' state. Jan 23 17:30:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c966e6a629166f90af613fd9bd9e7357b0ca8acb3f6db1d85b12db935a6a5284-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:30:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0ebbf4ae368eaec4b9acadb25bc8cfd6fe9f5cb4eefb6b0398ff9574d9f5e149-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.031937758Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.031974939Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ed8c7e96\x2d39ce\x2d418d\x2d9a8b\x2d61d4c613d4d5.mount: Succeeded.
Jan 23 17:30:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ed8c7e96\x2d39ce\x2d418d\x2d9a8b\x2d61d4c613d4d5.mount: Succeeded.
Jan 23 17:30:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ed8c7e96\x2d39ce\x2d418d\x2d9a8b\x2d61d4c613d4d5.mount: Succeeded.
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.062313254Z" level=info msg="runSandbox: deleting pod ID 06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9 from idIndex" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.062339560Z" level=info msg="runSandbox: removing pod sandbox 06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.062355339Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.062372303Z" level=info msg="runSandbox: unmounting shmPath for sandbox 06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9-userdata-shm.mount: Succeeded.
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.081495541Z" level=info msg="runSandbox: removing pod sandbox from storage: 06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.084835043Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:57.084853504Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=b3a482fb-910d-455f-84d3-c7bf8ee1de37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:57.085102 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:30:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:57.085143 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:30:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:57.085166 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:30:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:57.085216 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(06de120a6dbf4a5c1070a66916339c7ad8259a303671052fefbb9cf47d66cea9): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:30:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:58.141705315Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.649007672Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.649046253Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.649016483Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.649128831Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.650466691Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.650495123Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.653463886Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.653491814Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c6152bc9\x2d0901\x2d4c1a\x2d9731\x2ddea4b6497b14.mount: Succeeded.
Jan 23 17:30:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5fe15de7\x2dfc1a\x2d47b6\x2dbbc1\x2dc8186dab74e1.mount: Succeeded.
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.654298334Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.654332510Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d2374aeb\x2d0b4b\x2d4036\x2d9035\x2de814fd1747f5.mount: Succeeded.
Jan 23 17:30:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c6624b93\x2da64a\x2d466e\x2db65f\x2de7fc5b6e28dc.mount: Succeeded.
Jan 23 17:30:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-89f387b9\x2d2e07\x2d45d0\x2dada2\x2dfef759e5045e.mount: Succeeded.
Jan 23 17:30:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d2374aeb\x2d0b4b\x2d4036\x2d9035\x2de814fd1747f5.mount: Succeeded.
Jan 23 17:30:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c6152bc9\x2d0901\x2d4c1a\x2d9731\x2ddea4b6497b14.mount: Succeeded.
Jan 23 17:30:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5fe15de7\x2dfc1a\x2d47b6\x2dbbc1\x2dc8186dab74e1.mount: Succeeded.
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.701318331Z" level=info msg="runSandbox: deleting pod ID 7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad from idIndex" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.701345949Z" level=info msg="runSandbox: removing pod sandbox 7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.701360554Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.701375271Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.701319088Z" level=info msg="runSandbox: deleting pod ID 69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c from idIndex" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.701430051Z" level=info msg="runSandbox: removing pod sandbox 69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.701442964Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.701457142Z" level=info msg="runSandbox: unmounting shmPath for sandbox 69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.703309921Z" level=info msg="runSandbox: deleting pod ID 916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032 from idIndex" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.703332504Z" level=info msg="runSandbox: removing pod sandbox 916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.703345013Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.703357133Z" level=info msg="runSandbox: unmounting shmPath for sandbox 916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.711301216Z" level=info msg="runSandbox: deleting pod ID 322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173 from idIndex" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.711325023Z" level=info msg="runSandbox: removing pod sandbox 322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.711339222Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.711350750Z" level=info msg="runSandbox: unmounting shmPath for sandbox 322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.713279956Z" level=info msg="runSandbox: deleting pod ID 880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70 from idIndex" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.713310627Z" level=info msg="runSandbox: removing pod sandbox 880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.713323828Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.713335517Z" level=info msg="runSandbox: unmounting shmPath for sandbox 880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.721468360Z" level=info msg="runSandbox: removing pod sandbox from storage: 7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.721486265Z" level=info msg="runSandbox: removing pod sandbox from storage: 69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.721470982Z" level=info msg="runSandbox: removing pod sandbox from storage: 916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.725068890Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.725087156Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=7ac2e1e9-0643-4cf3-b1eb-e03f1b57f8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.725380 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.725433 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.725459 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.725505 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.725446981Z" level=info msg="runSandbox: removing pod sandbox from storage: 322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.728509445Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.728530785Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=2b026066-356f-412c-9751-9fe6b4802877 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.728757 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.728916 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.728983 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.729044 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.729461828Z" level=info msg="runSandbox: removing pod sandbox from storage: 880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.731482662Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.731501620Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=9190248a-fd89-4832-b2cc-022767512d71 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.731667 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.731709 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.731731 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.731775 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.734471842Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.734490191Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=9793f6da-d241-49fe-af7a-d03eed67850a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.734709 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.734745 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.734766 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.734806 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.740747276Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.740770467Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=0eff4b5a-5d77-4bef-b477-2cefe5b3d63c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.741010 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.741057 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.741079 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:30:59.741120 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:59.770236 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:59.770269 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:59.770480 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.770501087Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:59.770541 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.770567016Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.770587255Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.770619602Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:30:59.770683 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.770703181Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.770717382Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.770705127Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.770748734Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.771000372Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.771016928Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.792036373Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/7178fb3a-c156-4d47-81fd-ea37dfe1ba10 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.792068574Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.796288347Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/b6480486-8eb0-4eab-81c7-8eef7e6a2230 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.796310559Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.798789417Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/d924edab-43f4-4a11-b273-42372ce51ace Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:59 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.798808349Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.799858670Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c8b0fa12-abd0-42bf-9bf3-ed27bd193234 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.799878456Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.802017616Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/4782b887-5333-45ff-9890-19aad68f5332 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:30:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:30:59.802039981Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-89f387b9\x2d2e07\x2d45d0\x2dada2\x2dfef759e5045e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-89f387b9\x2d2e07\x2d45d0\x2dada2\x2dfef759e5045e.mount has successfully entered the 'dead' state. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-89f387b9\x2d2e07\x2d45d0\x2dada2\x2dfef759e5045e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-89f387b9\x2d2e07\x2d45d0\x2dada2\x2dfef759e5045e.mount has successfully entered the 'dead' state. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d2374aeb\x2d0b4b\x2d4036\x2d9035\x2de814fd1747f5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d2374aeb\x2d0b4b\x2d4036\x2d9035\x2de814fd1747f5.mount has successfully entered the 'dead' state. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c6624b93\x2da64a\x2d466e\x2db65f\x2de7fc5b6e28dc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c6624b93\x2da64a\x2d466e\x2db65f\x2de7fc5b6e28dc.mount has successfully entered the 'dead' state. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c6624b93\x2da64a\x2d466e\x2db65f\x2de7fc5b6e28dc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c6624b93\x2da64a\x2d466e\x2db65f\x2de7fc5b6e28dc.mount has successfully entered the 'dead' state. 
Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-880554b0f5e2219de03a83f2f6891b6bd36b36be19a149e2acfd17479023dd70-userdata-shm.mount: Succeeded. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c6152bc9\x2d0901\x2d4c1a\x2d9731\x2ddea4b6497b14.mount: Succeeded. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5fe15de7\x2dfc1a\x2d47b6\x2dbbc1\x2dc8186dab74e1.mount: Succeeded. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7b838a57d4289b7710f36f7861495742a4866206a95f134e6a3aad540a747cad-userdata-shm.mount: Succeeded. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-916243ad0ece4b29236101ba9925585ec3f72398a1c498e2112a2335300fd032-userdata-shm.mount: Succeeded. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-322422306d799598a9da46d5f8d6fbb5d9f310c6e724778783e59f9ba30b3173-userdata-shm.mount: Succeeded. Jan 23 17:31:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-69f59fc4c2cfaedad919da350166bcc2f6412ad2f41bcaa397c2a6da573d404c-userdata-shm.mount: Succeeded. Jan 23 17:31:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:00.995885 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:31:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:00.996027 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:31:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:00.996308275Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:00.996350963Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:00.996401543Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:00.996430642Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:01.010011425Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/67785c3f-d4fc-4d47-a71a-97d2e76fd320 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:01.010030982Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:01.011439643Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/b460f14f-53f6-4a21-b751-27c01b12e1db Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:01.011462518Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:02.996336 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:31:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:02.996886 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:31:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:03.952595132Z" level=info msg="NetworkStart: stopping network for sandbox 5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:03.952739729Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-hzzwf Namespace:openshift-multus 
ID:5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5 UID:f8b6f41a-7844-454d-bc89-62a41e96effc NetNS:/var/run/netns/b1b116b0-db4f-4206-9ea4-d8c0942f747e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:03.952763802Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:03.952770307Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:03.952776012Z" level=info msg="Deleting pod openshift-multus_cni-sysctl-allowlist-ds-hzzwf from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.032125094Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.032158188Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-647c5129\x2dd08e\x2d49c7\x2db42f\x2d953b3f72dc22.mount: Succeeded. Jan 23 17:31:05 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-647c5129\x2dd08e\x2d49c7\x2db42f\x2d953b3f72dc22.mount: Succeeded. Jan 23 17:31:05 hub-master-0.workload.bos2.lab systemd[1]: run-netns-647c5129\x2dd08e\x2d49c7\x2db42f\x2d953b3f72dc22.mount: Succeeded.
Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.080306232Z" level=info msg="runSandbox: deleting pod ID a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e from idIndex" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.080331127Z" level=info msg="runSandbox: removing pod sandbox a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.080344475Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.080356627Z" level=info msg="runSandbox: unmounting shmPath for sandbox a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.100460429Z" level=info msg="runSandbox: removing pod sandbox from storage: a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.103679813Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:05.103699565Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=0c91bc05-07ac-4a63-9ed7-80cfa0614752 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:05.103900 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:31:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:05.103946 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:31:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:05.103969 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:31:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:05.104011 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a3f98a4a21c5c0a327799d282dd7aad2af14e4aa0fa9fd6e864235012f52604e): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:31:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:06.995986 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:31:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:06.996045 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:31:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:06.996337758Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:06.996381311Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:06.996415599Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:06.996443154Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:07.011695517Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/82c5e4c0-66d8-42fa-a969-3fea901d80d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:07.011715566Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:07.012556272Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/01c379c9-4bab-48fc-8f7e-c6f59dd7502b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:07.012576822Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.033139232Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.033197471Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2a3c27fe\x2d9bdb\x2d44c7\x2d8d89\x2df2129cb2f7d2.mount: Succeeded. Jan 23 17:31:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2a3c27fe\x2d9bdb\x2d44c7\x2d8d89\x2df2129cb2f7d2.mount: Succeeded. Jan 23 17:31:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2a3c27fe\x2d9bdb\x2d44c7\x2d8d89\x2df2129cb2f7d2.mount: Succeeded. Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.087316800Z" level=info msg="runSandbox: deleting pod ID c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add from idIndex" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.087344120Z" level=info msg="runSandbox: removing pod sandbox c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.087357664Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.087368901Z" level=info msg="runSandbox: unmounting shmPath for sandbox c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add-userdata-shm.mount: Succeeded.
Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.104469703Z" level=info msg="runSandbox: removing pod sandbox from storage: c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.107367662Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.107588310Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=2db8726b-7607-4b6a-94ac-1f092eb8e655 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:09.107786 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:31:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:09.107828 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:31:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:09.107854 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:31:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:09.107901 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(c11fe39cba9a2b6aafe0c68d3d177224c1b88f793631ae10e1ed757f84534add): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:31:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:09.996103 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:31:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:09.996255 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.996432946Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.996468626Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.996560490Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:09.996589426Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:10.010160067Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/e566bfc5-a253-485a-90c4-8d444a8722b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:10.010180586Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:10.011141720Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/546e6af5-47eb-423a-8512-82a8ab306743 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:10.011163136Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:10.996107 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:31:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:10.998955128Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:10.999005123Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:11.013817534Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/72017617-9db5-4b41-9af9-f9e5abf946e0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:11.013847253Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:13.996729 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:31:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:13.997254 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:31:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:18.622959 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hzzwf] Jan 23 17:31:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:20.996059 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:31:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:20.996370155Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:20.996408159Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:21.007188139Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ea7838de-7b8d-498a-b43f-b70db7265e0e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:21.007214356Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:23.756818574Z" level=info msg="NetworkStart: stopping network for sandbox 9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:23.756965763Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/12f82047-9e20-4d02-8593-c8db7dd07b74 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:23.756987332Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:23.756994513Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:23.757000419Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:23.996487 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:31:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:23.996911777Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:23.996944126Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:31:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:24.008520301Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/c0072120-95ef-42c9-a31c-3714e3f4b46d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:24.008545231Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:26.996756 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:31:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:26.997315 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:27.902092 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:27.902109 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:27.902116 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:27.902122 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:27.902130 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:27.902137 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:31:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:27.902143 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:31:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:28.141649810Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:31:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:34.772769188Z" level=info msg="NetworkStart: stopping network for sandbox 77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:34.772923249Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/c1df152e-7e0e-4f41-998c-7627e83dd872 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:34.772948772Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:34.772955938Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:34.772962279Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:38.025477506Z" level=info msg="NetworkStart: stopping network for sandbox f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:38.025623865Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/5a74d849-f10d-4950-a4a2-18da712cd4e2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:38.025646639Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:38.025653170Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:38.025659668Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1207] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1211] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:31:38 
hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1213] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1422] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1423] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1434] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1437] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1437] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1438] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1441] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:31:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495098.1444] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:31:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495099.6952] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:31:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:40.020878084Z" level=info msg="NetworkStart: stopping network for sandbox 59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:40.021277941Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/c2601102-4a4d-4e06-9d58-25e80945d5f4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:40.021304799Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:40.021311740Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:40.021318777Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:41.996983 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:31:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:41.997511 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.805292744Z" level=info msg="NetworkStart: stopping network for sandbox 9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.805517807Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/7178fb3a-c156-4d47-81fd-ea37dfe1ba10 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.805543283Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.805551893Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.805558766Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.811198416Z" level=info msg="NetworkStart: stopping network for sandbox c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.811356482Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/b6480486-8eb0-4eab-81c7-8eef7e6a2230 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.811380424Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.811386905Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.811392789Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813189148Z" level=info msg="NetworkStart: stopping network for sandbox 250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:31:44.813331583Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/d924edab-43f4-4a11-b273-42372ce51ace Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813356788Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813364627Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813372292Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813808462Z" level=info msg="NetworkStart: stopping network for sandbox fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813935572Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c8b0fa12-abd0-42bf-9bf3-ed27bd193234 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813961041Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813968781Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.813975411Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.815234196Z" level=info msg="NetworkStart: stopping network for sandbox cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.815358388Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/4782b887-5333-45ff-9890-19aad68f5332 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.815385365Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.815392098Z" level=warning msg="falling back to loading 
from existing plugins on disk" Jan 23 17:31:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:44.815398385Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.022986621Z" level=info msg="NetworkStart: stopping network for sandbox db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.023120657Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/67785c3f-d4fc-4d47-a71a-97d2e76fd320 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.023142784Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.023149714Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.023155589Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.024319426Z" level=info msg="NetworkStart: stopping network for sandbox 79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.024462032Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/b460f14f-53f6-4a21-b751-27c01b12e1db Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.024487853Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.024495373Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:46.024501813Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:48.963864037Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_cni-sysctl-allowlist-ds-hzzwf_openshift-multus_f8b6f41a-7844-454d-bc89-62a41e96effc_0(5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5): error removing pod openshift-multus_cni-sysctl-allowlist-ds-hzzwf from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-multus/cni-sysctl-allowlist-ds-hzzwf/f8b6f41a-7844-454d-bc89-62a41e96effc]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:48.963901291Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b1b116b0\x2ddb4f\x2d4206\x2d9ea4\x2dd8c0942f747e.mount: Succeeded. Jan 23 17:31:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b1b116b0\x2ddb4f\x2d4206\x2d9ea4\x2dd8c0942f747e.mount: Succeeded. Jan 23 17:31:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b1b116b0\x2ddb4f\x2d4206\x2d9ea4\x2dd8c0942f747e.mount: Succeeded. Jan 23 17:31:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:49.001304435Z" level=info msg="runSandbox: deleting pod ID 5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5 from idIndex" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:49.001331151Z" level=info msg="runSandbox: removing pod sandbox 5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:49.001346331Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:49.001358144Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5-userdata-shm.mount: Succeeded.
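The entries above show Multus timing out while polling for the OVN-Kubernetes readiness-indicator file (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf) on both ADD and DEL: until the default network writes that file, every sandbox operation fails with "timed out waiting for the condition". A minimal stdlib Go sketch of that gate (not Multus's actual source; the function name and the 1s/10s durations are illustrative):

```go
// waitForReadinessIndicator mimics the poll-immediately-then-retry loop the
// "PollImmediate error waiting for ReadinessIndicatorFile" messages describe.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// Check first (the "immediate" in PollImmediate), then sleep.
		if _, err := os.Stat(path); err == nil {
			return nil // default network has written its CNI config
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// OVN-Kubernetes writes this file once the node's default network is up;
	// with ovnkube-node in CrashLoopBackOff it never appears, so every
	// sandbox ADD/DEL above fails the same way.
	err := waitForReadinessIndicator(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 10*time.Second, // illustrative values, not Multus's defaults
	)
	if err != nil {
		fmt.Println("Multus-style failure:", err)
	}
}
```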
Jan 23 17:31:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:49.014474064Z" level=info msg="runSandbox: removing pod sandbox from storage: 5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:49.021836299Z" level=info msg="runSandbox: releasing container name: k8s_POD_cni-sysctl-allowlist-ds-hzzwf_openshift-multus_f8b6f41a-7844-454d-bc89-62a41e96effc_0" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:49.021860801Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_cni-sysctl-allowlist-ds-hzzwf_openshift-multus_f8b6f41a-7844-454d-bc89-62a41e96effc_0" id=dc499427-adea-435f-8b33-706132afeaa4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:49.022060 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-hzzwf_openshift-multus_f8b6f41a-7844-454d-bc89-62a41e96effc_0(5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5): error adding pod openshift-multus_cni-sysctl-allowlist-ds-hzzwf to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-hzzwf/f8b6f41a-7844-454d-bc89-62a41e96effc]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:31:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:49.022110 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-hzzwf_openshift-multus_f8b6f41a-7844-454d-bc89-62a41e96effc_0(5259c6a4c6801ea3aa93e79129f9cc347ab46f8c2199c3897dd0b8ebaf7f8ba5): error adding pod openshift-multus_cni-sysctl-allowlist-ds-hzzwf to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-hzzwf/f8b6f41a-7844-454d-bc89-62a41e96effc]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/cni-sysctl-allowlist-ds-hzzwf" Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.013568 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b6f41a-7844-454d-bc89-62a41e96effc-cni-sysctl-allowlist\") pod \"f8b6f41a-7844-454d-bc89-62a41e96effc\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 17:31:50.013746 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/f8b6f41a-7844-454d-bc89-62a41e96effc/volumes/kubernetes.io~configmap/cni-sysctl-allowlist: clearQuota called, but quotas disabled Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.013771 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b6f41a-7844-454d-bc89-62a41e96effc-ready\") pod \"f8b6f41a-7844-454d-bc89-62a41e96effc\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.013791 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b6f41a-7844-454d-bc89-62a41e96effc-tuning-conf-dir\") pod \"f8b6f41a-7844-454d-bc89-62a41e96effc\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.013811 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdg6m\" (UniqueName: \"kubernetes.io/projected/f8b6f41a-7844-454d-bc89-62a41e96effc-kube-api-access-zdg6m\") pod \"f8b6f41a-7844-454d-bc89-62a41e96effc\" (UID: \"f8b6f41a-7844-454d-bc89-62a41e96effc\") " Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.013874 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b6f41a-7844-454d-bc89-62a41e96effc-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "f8b6f41a-7844-454d-bc89-62a41e96effc" (UID: "f8b6f41a-7844-454d-bc89-62a41e96effc"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.013888 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b6f41a-7844-454d-bc89-62a41e96effc-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "f8b6f41a-7844-454d-bc89-62a41e96effc" (UID: "f8b6f41a-7844-454d-bc89-62a41e96effc"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 17:31:50.013957 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/f8b6f41a-7844-454d-bc89-62a41e96effc/volumes/kubernetes.io~empty-dir/ready: clearQuota called, but quotas disabled Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.013985 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b6f41a-7844-454d-bc89-62a41e96effc-ready" (OuterVolumeSpecName: "ready") pod "f8b6f41a-7844-454d-bc89-62a41e96effc" (UID: "f8b6f41a-7844-454d-bc89-62a41e96effc"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:31:50 hub-master-0.workload.bos2.lab systemd[1]: var-lib-kubelet-pods-f8b6f41a\x2d7844\x2d454d\x2dbc89\x2d62a41e96effc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzdg6m.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-kubelet-pods-f8b6f41a\x2d7844\x2d454d\x2dbc89\x2d62a41e96effc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzdg6m.mount has successfully entered the 'dead' state. Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.026655 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b6f41a-7844-454d-bc89-62a41e96effc-kube-api-access-zdg6m" (OuterVolumeSpecName: "kube-api-access-zdg6m") pod "f8b6f41a-7844-454d-bc89-62a41e96effc" (UID: "f8b6f41a-7844-454d-bc89-62a41e96effc"). InnerVolumeSpecName "kube-api-access-zdg6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.114505 8631 reconciler.go:399] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b6f41a-7844-454d-bc89-62a41e96effc-ready\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.114525 8631 reconciler.go:399] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b6f41a-7844-454d-bc89-62a41e96effc-tuning-conf-dir\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.114535 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-zdg6m\" (UniqueName: \"kubernetes.io/projected/f8b6f41a-7844-454d-bc89-62a41e96effc-kube-api-access-zdg6m\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.114544 8631 reconciler.go:399] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b6f41a-7844-454d-bc89-62a41e96effc-cni-sysctl-allowlist\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 17:31:50 hub-master-0.workload.bos2.lab systemd[1]: Removed slice libcontainer container kubepods-besteffort-podf8b6f41a_7844_454d_bc89_62a41e96effc.slice. -- Subject: Unit kubepods-besteffort-podf8b6f41a_7844_454d_bc89_62a41e96effc.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-besteffort-podf8b6f41a_7844_454d_bc89_62a41e96effc.slice has finished shutting down. Jan 23 17:31:50 hub-master-0.workload.bos2.lab systemd[1]: kubepods-besteffort-podf8b6f41a_7844_454d_bc89_62a41e96effc.slice: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit kubepods-besteffort-podf8b6f41a_7844_454d_bc89_62a41e96effc.slice completed and consumed the indicated resources. 
Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.887289 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hzzwf] Jan 23 17:31:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:50.889765 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hzzwf] Jan 23 17:31:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:31:51.998198 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f8b6f41a-7844-454d-bc89-62a41e96effc path="/var/lib/kubelet/pods/f8b6f41a-7844-454d-bc89-62a41e96effc/volumes" Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.024673313Z" level=info msg="NetworkStart: stopping network for sandbox d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.024807666Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/82c5e4c0-66d8-42fa-a969-3fea901d80d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.024829476Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.024837280Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.024842985Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.025336219Z" level=info msg="NetworkStart: stopping network for sandbox 9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.025445228Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/01c379c9-4bab-48fc-8f7e-c6f59dd7502b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.025468277Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.025474854Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:52.025480412Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:52 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 17:31:52.996280 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:31:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:31:52.996779 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.023297472Z" level=info msg="NetworkStart: stopping network for sandbox 3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.023644111Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/e566bfc5-a253-485a-90c4-8d444a8722b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.023668788Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.023676784Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.023683398Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.025535572Z" level=info msg="NetworkStart: stopping network for sandbox 84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.025680030Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/546e6af5-47eb-423a-8512-82a8ab306743 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.025737215Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.025744870Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:55.025751472Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:56 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:56.026713658Z" level=info msg="NetworkStart: stopping network for sandbox e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:31:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:56.026873587Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/72017617-9db5-4b41-9af9-f9e5abf946e0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:31:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:56.026901578Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:31:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:56.026909090Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:31:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:56.026917725Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:31:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:31:58.143638061Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:32:00 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00105|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 deletes) Jan 23 17:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:04.996521 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:32:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:04.997376 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:32:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:06.019237257Z" level=info msg="NetworkStart: stopping network for sandbox 9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:06.019414755Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ea7838de-7b8d-498a-b43f-b70db7265e0e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:06.019438976Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:32:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:06.019445895Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:32:06 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:06.019452731Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.768443847Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.768480823Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-12f82047\x2d9e20\x2d4d02\x2d8593\x2dc8db7dd07b74.mount: Succeeded. Jan 23 17:32:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-12f82047\x2d9e20\x2d4d02\x2d8593\x2dc8db7dd07b74.mount: Succeeded. Jan 23 17:32:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-12f82047\x2d9e20\x2d4d02\x2d8593\x2dc8db7dd07b74.mount: Succeeded.
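Nearly every "NetworkStart: stopping network" entry in this log is followed by the same pair of messages: "error loading cached network config: network \"multus-cni-network\" not found in CNI cache" and "falling back to loading from existing plugins on disk". The runtime first tries the result cached when the network was added and, on a miss, re-reads the CNI configuration from disk. A stdlib-only Go sketch of that lookup order (not libcni's actual API; the cache-file layout and paths here are assumptions for illustration):

```go
// loadNetworkConfig prefers a cached per-network config and falls back to the
// on-disk CNI configuration directory when the cache has no entry, echoing the
// message pair repeated throughout this log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func loadNetworkConfig(cacheDir, confDir, network string) ([]byte, error) {
	cached := filepath.Join(cacheDir, network+".json") // illustrative cache layout
	if data, err := os.ReadFile(cached); err == nil {
		return data, nil
	}
	fmt.Printf("error loading cached network config: network %q not found in CNI cache\n", network)
	fmt.Println("falling back to loading from existing plugins on disk")

	matches, err := filepath.Glob(filepath.Join(confDir, "*.conflist"))
	if err != nil || len(matches) == 0 {
		return nil, fmt.Errorf("no CNI config for %q in %s", network, confDir)
	}
	return os.ReadFile(matches[0]) // first config on disk wins, as a sketch
}

func main() {
	if _, err := loadNetworkConfig("/var/lib/cni/cache", "/etc/cni/net.d", "multus-cni-network"); err != nil {
		fmt.Println(err)
	}
}
```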
Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.823330416Z" level=info msg="runSandbox: deleting pod ID 9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63 from idIndex" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.823356261Z" level=info msg="runSandbox: removing pod sandbox 9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.823372670Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.823384481Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63-userdata-shm.mount: Succeeded. Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.848447160Z" level=info msg="runSandbox: removing pod sandbox from storage: 9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.852049968Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.852066877Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=7bfb8075-6f3a-416d-af33-c14ca47fb9f6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:08.852252 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready?
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:08.852298 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:32:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:08.852319 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:32:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:08.852368 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9c585ca30617c19d164b722562f1a1a701e33e1be14e7b23a6a9c60329323d63): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298 Jan 23 17:32:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:08.899223 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.899523101Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.899555729Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.911133578Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/98bd3551-d642-4060-b125-a7669bd343ac Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:08.911153528Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:09.021619319Z" level=info msg="NetworkStart: stopping network for sandbox 2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:09.021744644Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/c0072120-95ef-42c9-a31c-3714e3f4b46d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:09.021766356Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:32:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:09.021773828Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:32:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:09.021779905Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:15.996864 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:32:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:15.997382 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.783690838Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.783734364Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c1df152e\x2d7e0e\x2d4f41\x2d998c\x2d7627e83dd872.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c1df152e\x2d7e0e\x2d4f41\x2d998c\x2d7627e83dd872.mount has successfully entered the 'dead' state. Jan 23 17:32:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c1df152e\x2d7e0e\x2d4f41\x2d998c\x2d7627e83dd872.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c1df152e\x2d7e0e\x2d4f41\x2d998c\x2d7627e83dd872.mount has successfully entered the 'dead' state. Jan 23 17:32:19 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c1df152e\x2d7e0e\x2d4f41\x2d998c\x2d7627e83dd872.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c1df152e\x2d7e0e\x2d4f41\x2d998c\x2d7627e83dd872.mount has successfully entered the 'dead' state. 
Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.825317706Z" level=info msg="runSandbox: deleting pod ID 77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92 from idIndex" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.825343899Z" level=info msg="runSandbox: removing pod sandbox 77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.825357509Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.825370366Z" level=info msg="runSandbox: unmounting shmPath for sandbox 77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.837455404Z" level=info msg="runSandbox: removing pod sandbox from storage: 77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.840713813Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.840733099Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=5b487876-80d4-466b-97de-29414ed420be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:19.840945 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:19.840992 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:32:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:19.841014 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:32:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:19.841061 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(77abc276f559d68c892fc3d956dd8971acb1557b27056d7ac858bcd7820fef92): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:32:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:19.922581 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.922773720Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.922805887Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.934554497Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/c9abf540-64a4-4aa1-af5e-7fb59aada69a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:19.934753423Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.036038978Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.036078305Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5a74d849\x2df10d\x2d4950\x2da4a2\x2d18da712cd4e2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5a74d849\x2df10d\x2d4950\x2da4a2\x2d18da712cd4e2.mount has successfully entered the 'dead' state. Jan 23 17:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5a74d849\x2df10d\x2d4950\x2da4a2\x2d18da712cd4e2.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5a74d849\x2df10d\x2d4950\x2da4a2\x2d18da712cd4e2.mount has successfully entered the 'dead' state. Jan 23 17:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5a74d849\x2df10d\x2d4950\x2da4a2\x2d18da712cd4e2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5a74d849\x2df10d\x2d4950\x2da4a2\x2d18da712cd4e2.mount has successfully entered the 'dead' state. Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.078302275Z" level=info msg="runSandbox: deleting pod ID f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74 from idIndex" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.078327523Z" level=info msg="runSandbox: removing pod sandbox f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.078341003Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.078353384Z" level=info msg="runSandbox: unmounting shmPath for sandbox f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.091453874Z" level=info msg="runSandbox: removing pod sandbox from storage: f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.094744573Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:23.094763603Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=0f7cb62a-f7ec-4dd1-b7d2-f9bbac866167 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:23.094985 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:23.095027 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:23.095052 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:32:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:23.095098 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(f35a65c09e8c7c7b826e67fed2192e2cf9422dc7731874baa920897228f57e74): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.031128065Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.031166355Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c2601102\x2d4a4d\x2d4e06\x2d9d58\x2d25e80945d5f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c2601102\x2d4a4d\x2d4e06\x2d9d58\x2d25e80945d5f4.mount has successfully entered the 'dead' state. Jan 23 17:32:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c2601102\x2d4a4d\x2d4e06\x2d9d58\x2d25e80945d5f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c2601102\x2d4a4d\x2d4e06\x2d9d58\x2d25e80945d5f4.mount has successfully entered the 'dead' state. Jan 23 17:32:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c2601102\x2d4a4d\x2d4e06\x2d9d58\x2d25e80945d5f4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c2601102\x2d4a4d\x2d4e06\x2d9d58\x2d25e80945d5f4.mount has successfully entered the 'dead' state. 
Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.081302653Z" level=info msg="runSandbox: deleting pod ID 59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a from idIndex" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.081326307Z" level=info msg="runSandbox: removing pod sandbox 59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.081339343Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.081350141Z" level=info msg="runSandbox: unmounting shmPath for sandbox 59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.097444778Z" level=info msg="runSandbox: removing pod sandbox from storage: 59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.101015121Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:25.101034177Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=84c98212-7382-4f39-80fc-4ee6a4b67c0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:25.101253 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:25.101301 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:32:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:25.101325 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:32:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:25.101375 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(59b5f835567075501a017e40e1f1563a758d8489268b574f992d671c6b636b2a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:27.903165 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:27.903185 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:27.903191 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:27.903199 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:27.903211 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:27.903217 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:32:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:27.903223 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:32:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:28.141633014Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.818911218Z" level=error 
msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.818960294Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.823104754Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.823146727Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7178fb3a\x2dc156\x2d4d47\x2d81fd\x2dea37dfe1ba10.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7178fb3a\x2dc156\x2d4d47\x2d81fd\x2dea37dfe1ba10.mount has successfully entered the 'dead' state. 
Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.823638267Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.823670585Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.823676587Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.823706171Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.826087450Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.826116680Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab systemd[1]: 
run-utsns-c8b0fa12\x2dabd0\x2d42bf\x2d9bf3\x2ded27bd193234.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c8b0fa12\x2dabd0\x2d42bf\x2d9bf3\x2ded27bd193234.mount has successfully entered the 'dead' state. Jan 23 17:32:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d924edab\x2d43f4\x2d4a11\x2db273\x2d42372ce51ace.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d924edab\x2d43f4\x2d4a11\x2db273\x2d42372ce51ace.mount has successfully entered the 'dead' state. Jan 23 17:32:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b6480486\x2d8eb0\x2d4eab\x2d81c7\x2d8eef7e6a2230.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b6480486\x2d8eb0\x2d4eab\x2d81c7\x2d8eef7e6a2230.mount has successfully entered the 'dead' state. Jan 23 17:32:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4782b887\x2d5333\x2d45ff\x2d9890\x2d19aad68f5332.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4782b887\x2d5333\x2d45ff\x2d9890\x2d19aad68f5332.mount has successfully entered the 'dead' state. Jan 23 17:32:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7178fb3a\x2dc156\x2d4d47\x2d81fd\x2dea37dfe1ba10.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7178fb3a\x2dc156\x2d4d47\x2d81fd\x2dea37dfe1ba10.mount has successfully entered the 'dead' state. Jan 23 17:32:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c8b0fa12\x2dabd0\x2d42bf\x2d9bf3\x2ded27bd193234.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c8b0fa12\x2dabd0\x2d42bf\x2d9bf3\x2ded27bd193234.mount has successfully entered the 'dead' state. Jan 23 17:32:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b6480486\x2d8eb0\x2d4eab\x2d81c7\x2d8eef7e6a2230.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b6480486\x2d8eb0\x2d4eab\x2d81c7\x2d8eef7e6a2230.mount has successfully entered the 'dead' state. 
Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.865328837Z" level=info msg="runSandbox: deleting pod ID 9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19 from idIndex" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.865359077Z" level=info msg="runSandbox: removing pod sandbox 9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.865376477Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.865391345Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.869328004Z" level=info msg="runSandbox: deleting pod ID c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2 from idIndex" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.869358929Z" level=info msg="runSandbox: removing pod sandbox c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.869375521Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.869387479Z" level=info msg="runSandbox: unmounting shmPath for sandbox c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.869328837Z" level=info msg="runSandbox: deleting pod ID fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4 from idIndex" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.869437982Z" level=info msg="runSandbox: removing pod sandbox fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.869451376Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.869464546Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.873281220Z" level=info msg="runSandbox: deleting pod ID 250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a from idIndex" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.873306688Z" level=info msg="runSandbox: removing pod sandbox 250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.873320434Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.873332029Z" level=info msg="runSandbox: unmounting shmPath for sandbox 250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.874279554Z" level=info msg="runSandbox: deleting pod ID cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673 from idIndex" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.874304768Z" level=info msg="runSandbox: removing pod sandbox cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.874317919Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.874330726Z" level=info msg="runSandbox: unmounting shmPath for sandbox cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.881476027Z" level=info msg="runSandbox: removing pod sandbox from storage: 9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.884959965Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.884981233Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" 
id=e748c922-c241-4510-b154-d5dc59ce8d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.885210 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.885261 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.885285 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.885335 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.885450490Z" level=info msg="runSandbox: removing pod sandbox from storage: c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.885455943Z" level=info msg="runSandbox: removing pod sandbox from storage: fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.888736215Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.888755283Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=1d9d6993-e933-4809-bf04-b3c26efeea6d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.889021 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default 
network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.889067 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.889092 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.889142 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.889445129Z" level=info msg="runSandbox: removing pod sandbox from storage: 250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.889451105Z" level=info msg="runSandbox: removing pod sandbox from storage: cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.891650168Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.891667938Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=bea08b03-564d-4b7f-8f23-26faafba5cd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.891780 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.891813 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.891835 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.891874 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.894678940Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.894696424Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=c765c3f8-f958-48a1-8363-30abdef2e620 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.894875 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.894909 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.894931 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.894969 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.897654715Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.897672970Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=7a0d277a-9f1d-4ee4-a82c-9d9ba0cdf79a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.897844 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.897878 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.897899 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.897939 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:29.940157 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:29.940340 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.940376611Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.940410424Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:29.940448 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:29.940582 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:29.940631 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.940765485Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.940796359Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.940830051Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.940871490Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.940918742Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.940946371Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.945572546Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.945628761Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.959433374Z" level=info msg="Got pod network 
&{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/faf3955b-428c-4892-9c45-bab6fa650ea1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.959460171Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.960064282Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9ebf1e35-6608-4003-919f-56f5c0fa8606 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.960086308Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.974133540Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/972dc8b1-b173-49b7-9c63-0d64c0d681ca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.974154276Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.975781135Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/0ae83ffc-db76-40e0-a18c-8e386911120a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.975799643Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.976689849Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/7e01112b-987d-4c32-bd45-54bdae7320b8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:29.976711283Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:29 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 17:32:29.997342 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:32:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:29.997922 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4782b887\x2d5333\x2d45ff\x2d9890\x2d19aad68f5332.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4782b887\x2d5333\x2d45ff\x2d9890\x2d19aad68f5332.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4782b887\x2d5333\x2d45ff\x2d9890\x2d19aad68f5332.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4782b887\x2d5333\x2d45ff\x2d9890\x2d19aad68f5332.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c8b0fa12\x2dabd0\x2d42bf\x2d9bf3\x2ded27bd193234.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c8b0fa12\x2dabd0\x2d42bf\x2d9bf3\x2ded27bd193234.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d924edab\x2d43f4\x2d4a11\x2db273\x2d42372ce51ace.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d924edab\x2d43f4\x2d4a11\x2db273\x2d42372ce51ace.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d924edab\x2d43f4\x2d4a11\x2db273\x2d42372ce51ace.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d924edab\x2d43f4\x2d4a11\x2db273\x2d42372ce51ace.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-cf2733e4077dcd244f57d047415aa7703f9273a232f4de41ad56a2558613c673-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b6480486\x2d8eb0\x2d4eab\x2d81c7\x2d8eef7e6a2230.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b6480486\x2d8eb0\x2d4eab\x2d81c7\x2d8eef7e6a2230.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fe922064ad89fae45ad80eb47861357712a37d09cebdd53d474c9ac38e9f4dd4-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7178fb3a\x2dc156\x2d4d47\x2d81fd\x2dea37dfe1ba10.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7178fb3a\x2dc156\x2d4d47\x2d81fd\x2dea37dfe1ba10.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-250afc68fb2e8d5140c0f95e3b9339dedc7b847c0f91c8405fe83187fff1fa5a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c833ca95b6d49f9bdfe5b0e4713f9226bb9eb7386628cecd92fcb1543ae285b2-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:32:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9f704278eba4d93b1429ebb3ff5d8d4e90e22d1f762ecc082be095121ee0fb19-userdata-shm.mount has successfully entered the 'dead' state. 
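The run-netns/run-ipcns/run-utsns mount-unit names above are systemd-escaped: in a mount unit name '/' is encoded as '-', so literal dashes in the path (here, the namespace UUIDs) are written as \x2d to keep them distinguishable (the string-escaping rules in systemd.unit(5)). A minimal Go sketch of the reverse mapping, assuming only the \xNN-is-a-hex-byte rule; the helper name is made up for illustration and is not a systemd API:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitName undoes systemd's \xNN unit-name escaping,
// e.g. "\x2d" -> "-". Illustrative helper only.
func unescapeUnitName(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		// A "\xNN" escape occupies four bytes starting at i.
		if s[i] == '\\' && i+4 <= len(s) && s[i+1] == 'x' {
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	// Unit name copied from the journal lines above.
	fmt.Println(unescapeUnitName(`run-netns-4782b887\x2d5333\x2d45ff\x2d9890\x2d19aad68f5332.mount`))
	// Output: run-netns-4782b887-5333-45ff-9890-19aad68f5332.mount
}

Decoded, each of those mounts is simply a per-sandbox network/IPC/UTS namespace (or the sandbox's /dev/shm) being torn down after the failed RunPodSandbox calls logged at 17:32:29.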
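Every failure in this stretch of the log is the same wait: on each CNI ADD/DEL, Multus polls for the default network's readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which the OVN-Kubernetes node components write once they are functional; with ovnkube-node stuck in CrashLoopBackOff (17:32:29.997922 above), the file never appears and every poll ends in wait.ErrWaitTimeout, whose message is the "timed out waiting for the condition" suffix seen throughout. A minimal sketch of that check using k8s.io/apimachinery's wait.PollImmediate; the 1 s interval and 10 s timeout are illustrative, not Multus's configured values:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Path taken verbatim from the log lines above.
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

	// Poll until the readiness indicator file exists or the timeout expires.
	err := wait.PollImmediate(1*time.Second, 10*time.Second, func() (bool, error) {
		_, statErr := os.Stat(indicator)
		if statErr == nil {
			return true, nil // file exists: default network is ready
		}
		if os.IsNotExist(statErr) {
			return false, nil // not there yet: keep polling
		}
		return false, statErr // unexpected stat error: stop polling
	})
	if err != nil {
		// wait.ErrWaitTimeout stringifies as "timed out waiting for the
		// condition", the exact suffix in the errors above.
		fmt.Printf("still waiting for readinessindicatorfile @ %s. pollimmediate error: %v\n", indicator, err)
	}
}

Until the indicator file exists this loop can only time out, so kubelet keeps retrying RunPodSandbox and CRI-O keeps releasing and unmounting the half-created sandboxes; that is exactly the runSandbox create/cleanup churn that repeats below.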
Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.033961672Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.033996053Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.036529894Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.036567269Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-67785c3f\x2dd4fc\x2d4d47\x2da71a\x2d97d2e76fd320.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-67785c3f\x2dd4fc\x2d4d47\x2da71a\x2d97d2e76fd320.mount has successfully entered the 'dead' state. Jan 23 17:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b460f14f\x2d53f6\x2d4a21\x2db751\x2d27c01b12e1db.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b460f14f\x2d53f6\x2d4a21\x2db751\x2d27c01b12e1db.mount has successfully entered the 'dead' state. Jan 23 17:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b460f14f\x2d53f6\x2d4a21\x2db751\x2d27c01b12e1db.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b460f14f\x2d53f6\x2d4a21\x2db751\x2d27c01b12e1db.mount has successfully entered the 'dead' state. Jan 23 17:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-67785c3f\x2dd4fc\x2d4d47\x2da71a\x2d97d2e76fd320.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-67785c3f\x2dd4fc\x2d4d47\x2da71a\x2d97d2e76fd320.mount has successfully entered the 'dead' state. Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.086308241Z" level=info msg="runSandbox: deleting pod ID db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c from idIndex" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.086336161Z" level=info msg="runSandbox: removing pod sandbox db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.086351579Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.086362535Z" level=info msg="runSandbox: unmounting shmPath for sandbox db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.087313564Z" level=info msg="runSandbox: deleting pod ID 79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03 from idIndex" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.087340340Z" level=info msg="runSandbox: removing pod sandbox 79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.087357271Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.087371793Z" level=info msg="runSandbox: unmounting shmPath for sandbox 79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.105473144Z" level=info msg="runSandbox: removing pod sandbox from storage: db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.105479072Z" level=info msg="runSandbox: removing pod sandbox from storage: 79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.108709267Z" level=info msg="runSandbox: releasing container name: 
k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.108729692Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b347f36c-49be-417b-870a-0499a105b059 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:31.108947 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:31.108994 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:31.109022 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:31.109076 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.111927682Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:31.111945678Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=0e4e8c88-0e7d-49b0-b8b0-f9f47af2583e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:31.112139 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:31.112171 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:31.112192 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:32:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:31.112237 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b460f14f\x2d53f6\x2d4a21\x2db751\x2d27c01b12e1db.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b460f14f\x2d53f6\x2d4a21\x2db751\x2d27c01b12e1db.mount has successfully entered the 'dead' state. Jan 23 17:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-67785c3f\x2dd4fc\x2d4d47\x2da71a\x2d97d2e76fd320.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-67785c3f\x2dd4fc\x2d4d47\x2da71a\x2d97d2e76fd320.mount has successfully entered the 'dead' state. Jan 23 17:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-79f64b7078e1703a9851bfd14301acb9575c54a79040f548331ff9f87952ed03-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:32:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-db9243ad7096b12d419b3090b41c920599d2971e2dac36598914f44f52de120c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:32:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:36.995425 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:32:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:36.995732008Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:36.995770348Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.007309518Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/42bc810b-d04d-4cc1-97ba-4338fcea7ee4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.007332482Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.035945121Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.035973748Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.036037755Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed 
(delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.036714803Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-01c379c9\x2d4bab\x2d48fc\x2d8f7e\x2dc6f59dd7502b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-01c379c9\x2d4bab\x2d48fc\x2d8f7e\x2dc6f59dd7502b.mount has successfully entered the 'dead' state.
Jan 23 17:32:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-82c5e4c0\x2d66d8\x2d42fa\x2da969\x2d3fea901d80d5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-82c5e4c0\x2d66d8\x2d42fa\x2da969\x2d3fea901d80d5.mount has successfully entered the 'dead' state.
Jan 23 17:32:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-01c379c9\x2d4bab\x2d48fc\x2d8f7e\x2dc6f59dd7502b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-01c379c9\x2d4bab\x2d48fc\x2d8f7e\x2dc6f59dd7502b.mount has successfully entered the 'dead' state.
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.078322074Z" level=info msg="runSandbox: deleting pod ID d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7 from idIndex" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.078355025Z" level=info msg="runSandbox: removing pod sandbox d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.078368412Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.078377858Z" level=info msg="runSandbox: unmounting shmPath for sandbox d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.078322671Z" level=info msg="runSandbox: deleting pod ID 9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0 from idIndex" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.078437730Z" level=info msg="runSandbox: removing pod sandbox 9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.078449632Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.078459444Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.090417892Z" level=info msg="runSandbox: removing pod sandbox from storage: 9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.091421477Z" level=info msg="runSandbox: removing pod sandbox from storage: d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.093686040Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.093704523Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=9696712a-140e-4f93-bf2d-5fcc73316ebe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:37.093950 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:37.094149 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:37.094172 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:37.094238 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.096801813Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:37.096820091Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=a8b29763-3635-416d-9d0c-c20c304201a0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:37.097070 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:37.097114 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:37.097138 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:32:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:37.097182 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:32:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-01c379c9\x2d4bab\x2d48fc\x2d8f7e\x2dc6f59dd7502b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-01c379c9\x2d4bab\x2d48fc\x2d8f7e\x2dc6f59dd7502b.mount has successfully entered the 'dead' state.
Jan 23 17:32:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-82c5e4c0\x2d66d8\x2d42fa\x2da969\x2d3fea901d80d5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-82c5e4c0\x2d66d8\x2d42fa\x2da969\x2d3fea901d80d5.mount has successfully entered the 'dead' state.
Jan 23 17:32:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-82c5e4c0\x2d66d8\x2d42fa\x2da969\x2d3fea901d80d5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-82c5e4c0\x2d66d8\x2d42fa\x2da969\x2d3fea901d80d5.mount has successfully entered the 'dead' state.
Jan 23 17:32:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9b6c06943a536ce495a3ef18787657ab5978f40a63598f605f7b95352da8f0d0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:32:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d6c95edff9b8687738393c9ac1965018dc85b39d5f59544918c843652f5772b7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:32:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:38.996062 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:38.996402919Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:38.996442163Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:32:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:39.008043448Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/302227e0-f750-4857-b378-54e60d6849a7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:32:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:39.008064311Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.035176729Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.035221088Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.036886317Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.036925659Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e566bfc5\x2da253\x2d485a\x2d90c4\x2d8d444a8722b2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e566bfc5\x2da253\x2d485a\x2d90c4\x2d8d444a8722b2.mount has successfully entered the 'dead' state.
Jan 23 17:32:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-546e6af5\x2d47eb\x2d423a\x2d8512\x2d82a8ab306743.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-546e6af5\x2d47eb\x2d423a\x2d8512\x2d82a8ab306743.mount has successfully entered the 'dead' state.
Jan 23 17:32:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-546e6af5\x2d47eb\x2d423a\x2d8512\x2d82a8ab306743.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-546e6af5\x2d47eb\x2d423a\x2d8512\x2d82a8ab306743.mount has successfully entered the 'dead' state.
Jan 23 17:32:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e566bfc5\x2da253\x2d485a\x2d90c4\x2d8d444a8722b2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e566bfc5\x2da253\x2d485a\x2d90c4\x2d8d444a8722b2.mount has successfully entered the 'dead' state.
Jan 23 17:32:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-546e6af5\x2d47eb\x2d423a\x2d8512\x2d82a8ab306743.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-546e6af5\x2d47eb\x2d423a\x2d8512\x2d82a8ab306743.mount has successfully entered the 'dead' state.
Jan 23 17:32:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e566bfc5\x2da253\x2d485a\x2d90c4\x2d8d444a8722b2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-e566bfc5\x2da253\x2d485a\x2d90c4\x2d8d444a8722b2.mount has successfully entered the 'dead' state.
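[Editor's note] Every failure in this stretch, whether the CNI ADD ("failed (add)") or the CNI DEL ("failed (delete)"), is the same wait expiring: Multus polls for its default-network readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, and gives up with "timed out waiting for the condition". A minimal Go sketch of that kind of poll, assuming the k8s.io/apimachinery wait.PollImmediate helper that the lower-cased "pollimmediate error" in these messages points at; the function name, interval, and timeout below are illustrative and not taken from Multus's actual source:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the readiness indicator file exists.
// On timeout, wait.PollImmediate returns wait.ErrWaitTimeout, whose text is
// exactly the "timed out waiting for the condition" string in the journal.
func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		_, err := os.Stat(path)
		return err == nil, nil // absent: keep polling; present: done
	})
}

func main() {
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForReadinessIndicator(indicator, time.Second, 30*time.Second); err != nil {
		fmt.Printf("PollImmediate error waiting for ReadinessIndicatorFile: %v\n", err)
	}
}
```

On this node the poll can never succeed: that file is expected to be written by OVN-Kubernetes once the default network is up, and the CrashLoopBackOff entry for ovnkube-node-897lw later in this log (17:32:44) shows that component is not staying up.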
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.099335835Z" level=info msg="runSandbox: deleting pod ID 84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b from idIndex" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.099369929Z" level=info msg="runSandbox: removing pod sandbox 84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.099386169Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.099399540Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.099336007Z" level=info msg="runSandbox: deleting pod ID 3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071 from idIndex" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.099450007Z" level=info msg="runSandbox: removing pod sandbox 3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.099463141Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.099477791Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.115504068Z" level=info msg="runSandbox: removing pod sandbox from storage: 84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.115505650Z" level=info msg="runSandbox: removing pod sandbox from storage: 3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.118671915Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.118693266Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=3741b856-16c9-4292-9610-ce54fc5aa3ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:40.118933 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:40.118983 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:40.119010 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:40.119066 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.121760139Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:40.121778719Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=ec03ce93-a7d6-4532-86a7-175127464b29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:40.121993 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:40.122033 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:40.122055 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:32:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:40.122104 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.037751387Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.037793508Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-84499514856ed8fef7cc4a833e06d188a96580a0516c7aca1ef949f638e4868b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:32:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-3441485070ea63db92c45ec7be83e89af25a32accba45e4e9ba7f84fc9a3e071-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:32:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-72017617\x2d9db5\x2d4b41\x2d9af9\x2df9e5abf946e0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-72017617\x2d9db5\x2d4b41\x2d9af9\x2df9e5abf946e0.mount has successfully entered the 'dead' state.
Jan 23 17:32:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-72017617\x2d9db5\x2d4b41\x2d9af9\x2df9e5abf946e0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-72017617\x2d9db5\x2d4b41\x2d9af9\x2df9e5abf946e0.mount has successfully entered the 'dead' state.
Jan 23 17:32:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-72017617\x2d9db5\x2d4b41\x2d9af9\x2df9e5abf946e0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-72017617\x2d9db5\x2d4b41\x2d9af9\x2df9e5abf946e0.mount has successfully entered the 'dead' state.
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.084281414Z" level=info msg="runSandbox: deleting pod ID e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7 from idIndex" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.084306741Z" level=info msg="runSandbox: removing pod sandbox e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.084325425Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.084339202Z" level=info msg="runSandbox: unmounting shmPath for sandbox e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.104463928Z" level=info msg="runSandbox: removing pod sandbox from storage: e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.107818631Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.107836993Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=bab01234-d244-417d-a83b-682c250247eb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:41.108045 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:32:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:41.108089 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:32:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:41.108112 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:32:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:41.108157 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e7f640bd202109a2d38fe1204849032d8d0f10f7d8ff3125bfd302f14a7679e7): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:32:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:41.996060 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:32:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:41.996134 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.996522530Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.996568793Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.996609437Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:32:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:41.996575324Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:32:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:42.011312606Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/27dc097d-1bf0-4dd1-be76-5ec0ecea1225 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:32:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:42.011336549Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:32:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:42.012737080Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/75f01d33-2456-4f04-a1bc-c1459c0be051 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:32:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:42.012760397Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:32:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:44.996112 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:32:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:44.996622 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:32:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:49.996201 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:32:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:49.996533601Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:49.996574217Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:32:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:50.007343942Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/edfddadf-a9e9-4e75-a11a-fd23a9bf491e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:32:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:50.007367600Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.029836369Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.030070613Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ea7838de\x2d7b8d\x2d498a\x2db43f\x2db70db7265e0e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ea7838de\x2d7b8d\x2d498a\x2db43f\x2db70db7265e0e.mount has successfully entered the 'dead' state.
Jan 23 17:32:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ea7838de\x2d7b8d\x2d498a\x2db43f\x2db70db7265e0e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ea7838de\x2d7b8d\x2d498a\x2db43f\x2db70db7265e0e.mount has successfully entered the 'dead' state.
Jan 23 17:32:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ea7838de\x2d7b8d\x2d498a\x2db43f\x2db70db7265e0e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ea7838de\x2d7b8d\x2d498a\x2db43f\x2db70db7265e0e.mount has successfully entered the 'dead' state.
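[Editor's note] Entries from concurrent teardowns interleave above (for example, at 17:32:37 the d6c95edf... and 9b6c0694... cleanups alternate line by line), but CRI-O tags every entry for one RunPodSandbox request with the same id= UUID, which makes it easy to reassemble a per-request timeline. A throwaway Go helper along those lines, not part of CRI-O or any shipped tooling, purely an illustration of grouping journal lines by that UUID:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Reads journal lines on stdin and groups them by the id=<uuid> field that
// CRI-O attaches to each RunPodSandbox request, so interleaved teardowns
// read as one timeline per request, in first-seen order.
func main() {
	idRe := regexp.MustCompile(`id=([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})`)
	groups := map[string][]string{}
	var order []string

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		m := idRe.FindStringSubmatch(line)
		if m == nil {
			continue // no request id on this line; skip it
		}
		if _, seen := groups[m[1]]; !seen {
			order = append(order, m[1])
		}
		groups[m[1]] = append(groups[m[1]], line)
	}
	for _, id := range order {
		fmt.Printf("== request %s ==\n", id)
		for _, line := range groups[id] {
			fmt.Println(line)
		}
	}
}
```

Fed with something like `journalctl -u crio -o cat`, this separates, for example, the a8b29763... and 9696712a... cleanups that alternate entry by entry earlier in this section.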
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.072303582Z" level=info msg="runSandbox: deleting pod ID 9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01 from idIndex" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.072328187Z" level=info msg="runSandbox: removing pod sandbox 9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.072342537Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.072354500Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.092429533Z" level=info msg="runSandbox: removing pod sandbox from storage: 9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.095314998Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.095333187Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=a3b07a44-feec-42b7-adaa-068be2fbeb94 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:51.095509 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:32:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:51.095550 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:32:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:51.095572 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:32:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:51.095616 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(9b21980451b54cc9246b4ea9b68dd7e8b361ab4a9ef71f3f3be0fa00adfc6f01): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:32:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:51.995515 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.995823869Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:51.995859059Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:32:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:52.006270692Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/29b1c5da-19b6-4c1f-b489-2501c7035744 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:32:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:52.006290744Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:32:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:52.996146 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:32:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:52.996244 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:32:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:52.996482763Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:52.996528692Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:32:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:52.996551304Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:52.996584880Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.011098790Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/2bb29091-a0e2-4d62-a95b-16e02603ee26 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.011125112Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.011663703Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/6d2fc1dc-5fdc-4dee-ac3e-455c24e06978 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.011685296Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.923895112Z" level=info msg="NetworkStart: stopping network for sandbox 1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.924038280Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/98bd3551-d642-4060-b125-a7669bd343ac Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.924061707Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.924069193Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:32:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:53.924075412Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.032156022Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.032193770Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c0072120\x2d95ef\x2d42c9\x2da31c\x2d3714e3f4b46d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c0072120\x2d95ef\x2d42c9\x2da31c\x2d3714e3f4b46d.mount has successfully entered the 'dead' state.
Jan 23 17:32:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c0072120\x2d95ef\x2d42c9\x2da31c\x2d3714e3f4b46d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c0072120\x2d95ef\x2d42c9\x2da31c\x2d3714e3f4b46d.mount has successfully entered the 'dead' state.
Jan 23 17:32:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c0072120\x2d95ef\x2d42c9\x2da31c\x2d3714e3f4b46d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c0072120\x2d95ef\x2d42c9\x2da31c\x2d3714e3f4b46d.mount has successfully entered the 'dead' state.
Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.077284311Z" level=info msg="runSandbox: deleting pod ID 2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43 from idIndex" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.077310469Z" level=info msg="runSandbox: removing pod sandbox 2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.077326038Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.077339886Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:32:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.093425389Z" level=info msg="runSandbox: removing pod sandbox from storage: 2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.096720163Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.096740523Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=930d625a-6b10-47c1-822e-2476ac1931c3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:54.096980 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:32:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:54.097033 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:32:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:54.097057 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:32:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:54.097110 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(2a2e690d37e2402658348509adfcae04e5bb2e26ba6dae05b76c3f8e3033bb43): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:32:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:54.996335 8631 util.go:30] "No sandbox for pod can be found. 
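The add failure above is Multus timing out while waiting for its default-network readiness indicator file. The "pollimmediate error" wording points at a wait.PollImmediate loop that stats the file until it appears or the timeout elapses; the following Go sketch illustrates that pattern only, with an assumed 1s interval and 10s timeout rather than the plugin's actual constants.

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until path exists, mirroring the
// "still waiting for readinessindicatorfile" behaviour in the log above.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			return false, nil // file not there yet: keep polling
		}
		return true, nil // file exists: default network is considered ready
	})
}

func main() {
	// Path taken from the log; the interval and timeout here are illustrative.
	err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
	if err != nil {
		// With the file absent this prints the same text seen in the log:
		// "timed out waiting for the condition".
		fmt.Println("pollimmediate error:", err)
	}
}

The indicator file is written by the default network plugin (here ovn-kubernetes) once it is up, which is why the crash-looping ovnkube-node container below keeps every sandbox creation failing.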
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.996642572Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:32:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:54.996683384Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:32:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:55.008165643Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/b9f47646-b7c2-4dd2-b09c-3afc66b24abc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:32:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:55.008192730Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:32:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:32:56.996969 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:32:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:32:56.997606 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:32:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:32:58.142579034Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:33:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:04.948303489Z" level=info msg="NetworkStart: stopping network for sandbox 7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:04.948452639Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/c9abf540-64a4-4aa1-af5e-7fb59aada69a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:04.948475825Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:04.948482246Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:04.948489340Z" level=info msg="Deleting pod 
openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:06.995955 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:33:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:06.996300092Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:06.996343869Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:33:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:07.007962437Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/53cebc50-adb8-4a1c-bcd3-e3d15103a1ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:07.007985493Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495188.1203] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495188.1208] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495188.1209] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495188.1472] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:33:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495188.1474] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:33:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:09.996074 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:33:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:09.996519 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:33:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:09.996515592Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:09.996564308Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:33:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:09.997041 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:33:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:10.007647042Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/1e4b7f3b-fa5c-4e30-8cf8-e52e0b9c1f4a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:10.007668313Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972179173Z" level=info msg="NetworkStart: stopping network for sandbox 894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972384535Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/faf3955b-428c-4892-9c45-bab6fa650ea1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972410113Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972416606Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972422674Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972492317Z" 
level=info msg="NetworkStart: stopping network for sandbox a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972643721Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9ebf1e35-6608-4003-919f-56f5c0fa8606 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972670964Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972678141Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.972685101Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.986929669Z" level=info msg="NetworkStart: stopping network for sandbox 5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.987057470Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/972dc8b1-b173-49b7-9c63-0d64c0d681ca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.987082475Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.987089347Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.987096055Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.987984148Z" level=info msg="NetworkStart: stopping network for sandbox 509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.988099647Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/7e01112b-987d-4c32-bd45-54bdae7320b8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:33:14.988122562Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.988130258Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.988138107Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.988567682Z" level=info msg="NetworkStart: stopping network for sandbox 1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.988698915Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/0ae83ffc-db76-40e0-a18c-8e386911120a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.988721531Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.988728769Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:14.988735263Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:22.020980628Z" level=info msg="NetworkStart: stopping network for sandbox bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:22.021137385Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/42bc810b-d04d-4cc1-97ba-4338fcea7ee4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:22.021159117Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:22.021165335Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:22.021171852Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 
17:33:23.997122 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" Jan 23 17:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:23.997910529Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=cba631e4-13fe-4990-9f13-bf0b89da1381 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:23.998041407Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=cba631e4-13fe-4990-9f13-bf0b89da1381 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:33:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:23.998583434Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=ed253886-788c-4f83-a2ce-2a195f08e8ca name=/runtime.v1.ImageService/ImageStatus Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.001024943Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ed253886-788c-4f83-a2ce-2a195f08e8ca name=/runtime.v1.ImageService/ImageStatus Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.002696335Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=0e1043a1-890b-44d1-a8ce-a4b0123b417c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.002771775Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.020271629Z" level=info msg="NetworkStart: stopping network for sandbox 5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.020399922Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/302227e0-f750-4857-b378-54e60d6849a7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.020422016Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.020428701Z" level=warning msg="falling back to loading 
from existing plugins on disk" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.020434632Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:24 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope. -- Subject: Unit crio-conmon-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:33:24 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca. -- Subject: Unit crio-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.112879152Z" level=info msg="Created container 98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=0e1043a1-890b-44d1-a8ce-a4b0123b417c name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.113417809Z" level=info msg="Starting container: 98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" id=5c529905-b6e4-4be3-8f51-a6cb779ec25b name=/runtime.v1.RuntimeService/StartContainer Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.132285128Z" level=info msg="Started container" PID=155016 containerID=98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=5c529905-b6e4-4be3-8f51-a6cb779ec25b name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.137783375Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.147704145Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.147725599Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.147736244Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.156249965Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.156271168Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:33:24.156283835Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.164929934Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.164947041Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.164955837Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.173611447Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.173626835Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.173637430Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.181853664Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:33:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:24.181870503Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:33:24 hub-master-0.workload.bos2.lab conmon[154993]: conmon 98f5e27fc85c63cedbcf : container 155016 exited with status 1 Jan 23 17:33:24 hub-master-0.workload.bos2.lab systemd[1]: crio-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope has successfully entered the 'dead' state. Jan 23 17:33:24 hub-master-0.workload.bos2.lab systemd[1]: crio-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope: Consumed 561ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope completed and consumed the indicated resources. Jan 23 17:33:24 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope has successfully entered the 'dead' state. Jan 23 17:33:24 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope: Consumed 47ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca.scope completed and consumed the indicated resources. 
Jan 23 17:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:25.047318 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/194.log"
Jan 23 17:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:25.047892 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/193.log"
Jan 23 17:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:25.049117 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" exitCode=1
Jan 23 17:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:25.049141 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca}
Jan 23 17:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:25.049162 8631 scope.go:115] "RemoveContainer" containerID="ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736"
Jan 23 17:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:25.050012 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:33:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:25.050033126Z" level=info msg="Removing container: ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736" id=e807fce7-c4d0-4156-8530-b6cb165bf008 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:25.050520 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:33:25 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-135ba2e892d44dfa223374435d3ac2dbf8aa4d9379bf7796a754687b55f3e6ca-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-135ba2e892d44dfa223374435d3ac2dbf8aa4d9379bf7796a754687b55f3e6ca-merged.mount has successfully entered the 'dead' state.
Jan 23 17:33:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:25.077291419Z" level=info msg="Removed container ba5484b950b0e26dbedcd4c47b8183e104406094a183839c66757ed5afb08736: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=e807fce7-c4d0-4156-8530-b6cb165bf008 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:33:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:25.668344 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:33:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:26.052072 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/194.log"
Jan 23 17:33:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:26.053936 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:33:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:26.054434 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.024757257Z" level=info msg="NetworkStart: stopping network for sandbox b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.024899966Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/27dc097d-1bf0-4dd1-be76-5ec0ecea1225 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.024923487Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.024930279Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.024936367Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.027554203Z" level=info msg="NetworkStart: stopping network for sandbox b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox
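The recurring "back-off 5m0s restarting failed container" entries reflect the kubelet's crash-loop back-off, which doubles the restart delay after each failure until it saturates at a cap. The following Go sketch illustrates that schedule under the commonly documented defaults of a 10s initial delay and a 5m cap; both values are assumptions here, with only the 5m cap directly visible in the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // assumed initial back-off
	const maxDelay = 5 * time.Minute // cap matching "back-off 5m0s" in the log
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart attempt %d: back-off %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Running this prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s repeatedly, which is why a container that has been failing for a while, like ovnkube-node-897lw here, is retried only once every five minutes.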
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.027698067Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/75f01d33-2456-4f04-a1bc-c1459c0be051 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.027723042Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.027731263Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:33:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:27.027738390Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:27.903371 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:27.903392 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:27.903398 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:27.903404 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:27.903410 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:27.903416 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:33:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:27.903423 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:33:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:28.142969099Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:33:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:35.019102722Z" level=info msg="NetworkStart: stopping network for sandbox 95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:35.019268252Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/edfddadf-a9e9-4e75-a11a-fd23a9bf491e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:33:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:35.019295492Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:33:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:35.019303042Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:33:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:35.019309130Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:33:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:37.018381177Z" level=info msg="NetworkStart: stopping network for sandbox f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:37.018517153Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/29b1c5da-19b6-4c1f-b489-2501c7035744 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:33:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:37.018540602Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:33:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:37.018547082Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:33:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:37.018553368Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:33:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:37.997237 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:33:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:37.997742 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.024092094Z" level=info msg="NetworkStart: stopping network for sandbox f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.024269155Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/2bb29091-a0e2-4d62-a95b-16e02603ee26 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.024299665Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.024309117Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.024316449Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.025104534Z" level=info msg="NetworkStart: stopping network for sandbox 6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.025248806Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/6d2fc1dc-5fdc-4dee-ac3e-455c24e06978 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.025274979Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.025282680Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.025290934Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.935938511Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.935976423Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-98bd3551\x2dd642\x2d4060\x2db125\x2da7669bd343ac.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-98bd3551\x2dd642\x2d4060\x2db125\x2da7669bd343ac.mount has successfully entered the 'dead' state.
Jan 23 17:33:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-98bd3551\x2dd642\x2d4060\x2db125\x2da7669bd343ac.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-98bd3551\x2dd642\x2d4060\x2db125\x2da7669bd343ac.mount has successfully entered the 'dead' state.
Jan 23 17:33:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-98bd3551\x2dd642\x2d4060\x2db125\x2da7669bd343ac.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-98bd3551\x2dd642\x2d4060\x2db125\x2da7669bd343ac.mount has successfully entered the 'dead' state.
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.984307140Z" level=info msg="runSandbox: deleting pod ID 1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61 from idIndex" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.984331894Z" level=info msg="runSandbox: removing pod sandbox 1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.984347213Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:38.984360791Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:33:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:39.007428882Z" level=info msg="runSandbox: removing pod sandbox from storage: 1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:39.011126777Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:39.011143868Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=836097b6-8fac-4bda-8ae2-3fd426960794 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:33:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:39.011401 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:33:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:39.011446 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:33:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:39.011468 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:33:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:39.011515 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1bd083a5cb85ba07198d0efafa8a0cb8fe7d57fdb06b53810a2889fcecb47e61): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:33:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:39.076216 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:33:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:39.076507672Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:39.076537259Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:33:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:39.088045753Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/b4c35e7f-336b-4a84-8a8b-c143c4360bbf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:39.088065493Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:40.021631882Z" level=info msg="NetworkStart: stopping network for sandbox 914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:40.021771029Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/b9f47646-b7c2-4dd2-b09c-3afc66b24abc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:40.021792143Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:40.021799484Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:40.021805513Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:49.960277315Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): 
timed out waiting for the condition" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:49.960315907Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c9abf540\x2d64a4\x2d4aa1\x2daf5e\x2d7fb59aada69a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-c9abf540\x2d64a4\x2d4aa1\x2daf5e\x2d7fb59aada69a.mount has successfully entered the 'dead' state. Jan 23 17:33:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c9abf540\x2d64a4\x2d4aa1\x2daf5e\x2d7fb59aada69a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-c9abf540\x2d64a4\x2d4aa1\x2daf5e\x2d7fb59aada69a.mount has successfully entered the 'dead' state. Jan 23 17:33:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c9abf540\x2d64a4\x2d4aa1\x2daf5e\x2d7fb59aada69a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-c9abf540\x2d64a4\x2d4aa1\x2daf5e\x2d7fb59aada69a.mount has successfully entered the 'dead' state. Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.005355408Z" level=info msg="runSandbox: deleting pod ID 7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa from idIndex" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.005381837Z" level=info msg="runSandbox: removing pod sandbox 7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.005397221Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.005417084Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:50 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.024454968Z" level=info msg="runSandbox: removing pod sandbox from storage: 7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.031884349Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.031907361Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=6f93901c-6ded-4953-8047-952be77de60c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:50.032139 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:33:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:50.032311 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:33:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:50.032337 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:33:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:50.032394 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7ac53197b0b8a7fa94226309de955d6a79cd461a634f72096f42b74c62a847aa): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:33:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:50.094565 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.094871239Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.094901655Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.105506399Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/464cc3b2-f67d-4454-b358-7f23d46a2a42 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:50.105525884Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:33:50.996850 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:33:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:33:50.997362 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:33:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:52.021382562Z" level=info msg="NetworkStart: stopping network for sandbox 58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:52.021542553Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/53cebc50-adb8-4a1c-bcd3-e3d15103a1ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:52.021563783Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:52.021571262Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:52.021578517Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:55.020250964Z" level=info msg="NetworkStart: stopping network 
for sandbox 46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:55.020398712Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/1e4b7f3b-fa5c-4e30-8cf8-e52e0b9c1f4a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:33:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:55.020422773Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:33:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:55.020429091Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:33:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:55.020435464Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:33:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:58.142677937Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:33:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:59.983635359Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:59.983671258Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:59.984462324Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:59 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:33:59.984488508Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9ebf1e35\x2d6608\x2d4003\x2d919f\x2d56f5c0fa8606.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9ebf1e35\x2d6608\x2d4003\x2d919f\x2d56f5c0fa8606.mount has successfully entered the 'dead' state. Jan 23 17:33:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-faf3955b\x2d428c\x2d4892\x2d9c45\x2dbab6fa650ea1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-faf3955b\x2d428c\x2d4892\x2d9c45\x2dbab6fa650ea1.mount has successfully entered the 'dead' state. Jan 23 17:33:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:59.997533594Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:59.997567487Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:59.998863221Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:33:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:33:59.998900010Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9ebf1e35\x2d6608\x2d4003\x2d919f\x2d56f5c0fa8606.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9ebf1e35\x2d6608\x2d4003\x2d919f\x2d56f5c0fa8606.mount has successfully entered the 'dead' state. Jan 23 17:34:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-faf3955b\x2d428c\x2d4892\x2d9c45\x2dbab6fa650ea1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-faf3955b\x2d428c\x2d4892\x2d9c45\x2dbab6fa650ea1.mount has successfully entered the 'dead' state. Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.000856989Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.000885541Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7e01112b\x2d987d\x2d4c32\x2dbd45\x2d54bdae7320b8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7e01112b\x2d987d\x2d4c32\x2dbd45\x2d54bdae7320b8.mount has successfully entered the 'dead' state. Jan 23 17:34:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-972dc8b1\x2db173\x2d49b7\x2d9c63\x2d0d64c0d681ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-972dc8b1\x2db173\x2d49b7\x2d9c63\x2d0d64c0d681ca.mount has successfully entered the 'dead' state. Jan 23 17:34:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0ae83ffc\x2ddb76\x2d40e0\x2da18c\x2d8e386911120a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0ae83ffc\x2ddb76\x2d40e0\x2da18c\x2d8e386911120a.mount has successfully entered the 'dead' state. 
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.038315929Z" level=info msg="runSandbox: deleting pod ID a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb from idIndex" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.038339669Z" level=info msg="runSandbox: removing pod sandbox a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.038353052Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.038365438Z" level=info msg="runSandbox: unmounting shmPath for sandbox a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.039307369Z" level=info msg="runSandbox: deleting pod ID 894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044 from idIndex" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.039330937Z" level=info msg="runSandbox: removing pod sandbox 894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.039342383Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.039353006Z" level=info msg="runSandbox: unmounting shmPath for sandbox 894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.045311489Z" level=info msg="runSandbox: deleting pod ID 509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6 from idIndex" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.045341031Z" level=info msg="runSandbox: removing pod sandbox 509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.045355334Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.045369798Z" level=info msg="runSandbox: unmounting shmPath for sandbox 509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.046285132Z" level=info msg="runSandbox: deleting pod ID 5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406 from idIndex" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.046308873Z" level=info msg="runSandbox: removing pod sandbox 5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.046321292Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.046332296Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.049312471Z" level=info msg="runSandbox: deleting pod ID 1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e from idIndex" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.049335516Z" level=info msg="runSandbox: removing pod sandbox 1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.049347300Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.049358734Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.049459191Z" level=info msg="runSandbox: removing pod sandbox from storage: 894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.049459744Z" level=info msg="runSandbox: removing pod sandbox from storage: a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.052658514Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.052677708Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=a4e2247a-8e25-41ac-b5f7-9a895f192a0c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.052946 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.052989 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.053013 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.053061 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.056072319Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.056094653Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=de09d069-83c2-4de5-93dc-32b9eb573e0b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.056213 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.056256 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.056279 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.056323 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.057441782Z" level=info msg="runSandbox: removing pod sandbox from storage: 5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.057482637Z" level=info msg="runSandbox: removing pod sandbox from storage: 509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.060708746Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.060728126Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=44702228-0bb6-4928-a617-d2d6e75385e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.060836 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.060867 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.060886 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.060923 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.061424066Z" level=info msg="runSandbox: removing pod sandbox from storage: 1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.063858059Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.063878078Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=1c149d74-c922-4a95-afb5-4ff101a770ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.064064 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.064100 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.064123 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.064162 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.066903698Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.066922269Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=a789c222-8879-444a-aedf-11daca13c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.067144 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.067177 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.067199 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:00.067247 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:00.110737 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:00.110871 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:00.110945 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:00.111036 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111054780Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111085108Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:00.111128 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111091091Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111191073Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111238685Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111262513Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111191176Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111360993Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111323881Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.111431508Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.142901747Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/45cb649d-89d7-4b91-baf0-9e375340c578 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.143104131Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.144393073Z" level=info msg="Got 
pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/cc298806-0174-4f4e-826c-a0e3b5ee0573 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.144413597Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.146036612Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ac7c7cca-6046-45e0-8bd5-c6f75a13f587 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.146058361Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.146928618Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/a5b83855-c2b8-4fd9-8041-1517d2f8cd04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.146950328Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.147832014Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/d3867752-a6d4-4456-aebc-39e87141a6ae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:00.147852769Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7e01112b\x2d987d\x2d4c32\x2dbd45\x2d54bdae7320b8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7e01112b\x2d987d\x2d4c32\x2dbd45\x2d54bdae7320b8.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7e01112b\x2d987d\x2d4c32\x2dbd45\x2d54bdae7320b8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7e01112b\x2d987d\x2d4c32\x2dbd45\x2d54bdae7320b8.mount has successfully entered the 'dead' state. 
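The sandbox failures above all end the same way: Multus reports "still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition". That message comes from polling for a file the default (OVN-Kubernetes) network writes once it is up; every CNI ADD in this section is queued behind that file appearing. A minimal Go sketch of this kind of wait, using the real (now deprecated but still available) k8s.io/apimachinery wait.PollImmediate helper; waitForReadinessIndicator and the 10-second timeout in main are illustrative assumptions, not Multus's actual code path:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the CNI readiness indicator file
// exists. If it never appears, PollImmediate returns the same
// "timed out waiting for the condition" error quoted in the log.
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		_, err := os.Stat(path)
		if err == nil {
			return true, nil // file present: default network is ready
		}
		if os.IsNotExist(err) {
			return false, nil // not yet: keep polling
		}
		return false, err // unexpected stat error aborts the poll
	})
}

func main() {
	err := waitForReadinessIndicator(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
	fmt.Println("readiness wait:", err)
}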
Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0ae83ffc\x2ddb76\x2d40e0\x2da18c\x2d8e386911120a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0ae83ffc\x2ddb76\x2d40e0\x2da18c\x2d8e386911120a.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0ae83ffc\x2ddb76\x2d40e0\x2da18c\x2d8e386911120a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0ae83ffc\x2ddb76\x2d40e0\x2da18c\x2d8e386911120a.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-972dc8b1\x2db173\x2d49b7\x2d9c63\x2d0d64c0d681ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-972dc8b1\x2db173\x2d49b7\x2d9c63\x2d0d64c0d681ca.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-972dc8b1\x2db173\x2d49b7\x2d9c63\x2d0d64c0d681ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-972dc8b1\x2db173\x2d49b7\x2d9c63\x2d0d64c0d681ca.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1c8ba574798c9d447365c5351119a203a7a29f4dcfc1b0e280917e2b8dc4852e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-509c08772299ef0c0a86d32ef22d6dd107b4005406c5f1ef6dbc53bc5f515cc6-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5284b435926f90ed4cc6039c94cf508cc5cc7eee25a5b1ad4ccd850ac112f406-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9ebf1e35\x2d6608\x2d4003\x2d919f\x2d56f5c0fa8606.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9ebf1e35\x2d6608\x2d4003\x2d919f\x2d56f5c0fa8606.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-faf3955b\x2d428c\x2d4892\x2d9c45\x2dbab6fa650ea1.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-faf3955b\x2d428c\x2d4892\x2d9c45\x2dbab6fa650ea1.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a5e629950142fc2c4532b5a8099a16a918c91f221c93f571b982dc97716a5ddb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-894a5df3f007536661abd559f06b41be2ead89093b3813bab0ff5d0198a1f044-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:03.996890 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:34:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:03.997420 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.032277700Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.032314787Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-42bc810b\x2dd04d\x2d4cc1\x2d97ba\x2d4338fcea7ee4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-42bc810b\x2dd04d\x2d4cc1\x2d97ba\x2d4338fcea7ee4.mount has successfully entered the 'dead' state. 
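The "back-off 5m0s restarting failed container=ovnkube-node" entry at 17:34:03 is the likely root cause of every Multus timeout in this section: while ovnkube-node sits in CrashLoopBackOff, OVN-Kubernetes never writes the readiness indicator file, so each CNI ADD polls until it times out. The 5m0s figure is kubelet's restart-backoff cap; a rough sketch of that doubling backoff, where the 10-second base and 5-minute cap are assumed kubelet defaults rather than values read from this log:

package main

import (
	"fmt"
	"time"
)

// restartBackoff doubles the delay after each consecutive crash and caps
// it, which is how a container settles at a steady "back-off 5m0s".
func restartBackoff(crashes int) time.Duration {
	const (
		base     = 10 * time.Second // assumed kubelet initial backoff
		maxDelay = 5 * time.Minute  // the 5m0s cap quoted in the log
	)
	d := base
	for i := 1; i < crashes; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("crash %d -> wait %v\n", n, restartBackoff(n))
	}
}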
Jan 23 17:34:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-42bc810b\x2dd04d\x2d4cc1\x2d97ba\x2d4338fcea7ee4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-42bc810b\x2dd04d\x2d4cc1\x2d97ba\x2d4338fcea7ee4.mount has successfully entered the 'dead' state. Jan 23 17:34:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-42bc810b\x2dd04d\x2d4cc1\x2d97ba\x2d4338fcea7ee4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-42bc810b\x2dd04d\x2d4cc1\x2d97ba\x2d4338fcea7ee4.mount has successfully entered the 'dead' state. Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.067313255Z" level=info msg="runSandbox: deleting pod ID bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7 from idIndex" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.067339187Z" level=info msg="runSandbox: removing pod sandbox bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.067354659Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.067367901Z" level=info msg="runSandbox: unmounting shmPath for sandbox bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7-userdata-shm.mount has successfully entered the 'dead' state. 
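The mount unit names throughout this section (run-netns-42bc810b\x2dd04d\x2d... and similar) are systemd-escaped: \x2d is a hex escape for "-", which systemd reserves as the path separator inside unit names. A small self-contained Go sketch that undoes the \xXX escapes so the namespace IDs can be matched against other log lines; unescapeUnitName is a hypothetical helper, and real systemd escaping also maps "/" to "-", which this sketch does not undo:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitName decodes systemd's \xXX escapes (e.g. \x2d for "-").
func unescapeUnitName(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnitName(`run-netns-42bc810b\x2dd04d\x2d4cc1\x2d97ba\x2d4338fcea7ee4.mount`))
	// -> run-netns-42bc810b-d04d-4cc1-97ba-4338fcea7ee4.mount
}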
Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.079467577Z" level=info msg="runSandbox: removing pod sandbox from storage: bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.082727732Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:07.082747543Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=692fdf25-2419-4d54-afb3-da6e7235c801 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:07.082972 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:07.083017 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:07.083041 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:07.083087 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(bb6927fa146e9ec41153b40e6a495830b93337f6531f66029235b197a23589c7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.031590198Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.031627195Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-302227e0\x2df750\x2d4857\x2db378\x2d54e60d6849a7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-302227e0\x2df750\x2d4857\x2db378\x2d54e60d6849a7.mount has successfully entered the 'dead' state. Jan 23 17:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-302227e0\x2df750\x2d4857\x2db378\x2d54e60d6849a7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-302227e0\x2df750\x2d4857\x2db378\x2d54e60d6849a7.mount has successfully entered the 'dead' state. Jan 23 17:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-302227e0\x2df750\x2d4857\x2db378\x2d54e60d6849a7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-302227e0\x2df750\x2d4857\x2db378\x2d54e60d6849a7.mount has successfully entered the 'dead' state. 
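Note that each failed sandbox is reported four times in succession, by remote_runtime.go, kuberuntime_sandbox.go, kuberuntime_manager.go, and finally pod_workers.go ("Error syncing pod, skipping"): the same RPC error is logged at every layer as it propagates up, and the outermost entry re-quotes it with one more level of escaping. A toy Go sketch of that wrap-and-relog pattern; the function names mirror the log's source files but are otherwise invented:

package main

import (
	"errors"
	"fmt"
	"log"
)

// errCNIAdd stands in for the "failed to create pod network sandbox ..."
// RPC error that every layer below re-logs.
var errCNIAdd = errors.New("rpc error: code = Unknown desc = failed to create pod network sandbox")

func runPodSandbox() error { // the CRI call (remote_runtime.go's layer)
	return errCNIAdd
}

func createSandboxForPod() error { // kuberuntime_sandbox.go's layer
	err := runPodSandbox()
	log.Printf("RunPodSandbox from runtime service failed: %v", err)
	return err
}

func syncPod() error { // kuberuntime_manager.go's layer
	if err := createSandboxForPod(); err != nil {
		log.Printf("CreatePodSandbox for pod failed: %v", err)
		// wrap with %w so the outer layer can still match the root cause
		return fmt.Errorf("failed to %q: %w", "CreatePodSandbox", err)
	}
	return nil
}

func main() { // pod_workers.go's layer: log once more and give up
	if err := syncPod(); err != nil {
		log.Printf("Error syncing pod, skipping: %v", err)
		fmt.Println("root cause is CNI add error:", errors.Is(err, errCNIAdd))
	}
}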
Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.067307258Z" level=info msg="runSandbox: deleting pod ID 5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f from idIndex" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.067330134Z" level=info msg="runSandbox: removing pod sandbox 5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.067343088Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.067355284Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.083440815Z" level=info msg="runSandbox: removing pod sandbox from storage: 5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.087501257Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:09.087519743Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=66ba7fe8-d525-4ec1-83f7-1aaef4311589 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:09.087719 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:09.087763 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:09.087790 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:09.087857 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(5b8ddd89649701fba835ce0118844b1af45ef128ac4eb53d6410f076df57e50f): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.036936221Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.036969819Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.038350322Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d7344e42-1329-4f1f-b282-9aba675f241f 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.038388076Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-27dc097d\x2d1bf0\x2d4dd1\x2dbe76\x2d5ec0ecea1225.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-27dc097d\x2d1bf0\x2d4dd1\x2dbe76\x2d5ec0ecea1225.mount has successfully entered the 'dead' state. Jan 23 17:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-75f01d33\x2d2456\x2d4f04\x2da1bc\x2dc1459c0be051.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-75f01d33\x2d2456\x2d4f04\x2da1bc\x2dc1459c0be051.mount has successfully entered the 'dead' state. Jan 23 17:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-27dc097d\x2d1bf0\x2d4dd1\x2dbe76\x2d5ec0ecea1225.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-27dc097d\x2d1bf0\x2d4dd1\x2dbe76\x2d5ec0ecea1225.mount has successfully entered the 'dead' state. Jan 23 17:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-75f01d33\x2d2456\x2d4f04\x2da1bc\x2dc1459c0be051.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-75f01d33\x2d2456\x2d4f04\x2da1bc\x2dc1459c0be051.mount has successfully entered the 'dead' state. Jan 23 17:34:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-27dc097d\x2d1bf0\x2d4dd1\x2dbe76\x2d5ec0ecea1225.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-27dc097d\x2d1bf0\x2d4dd1\x2dbe76\x2d5ec0ecea1225.mount has successfully entered the 'dead' state. 
Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.076304498Z" level=info msg="runSandbox: deleting pod ID b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98 from idIndex" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.076329154Z" level=info msg="runSandbox: removing pod sandbox b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.076344572Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.076356742Z" level=info msg="runSandbox: unmounting shmPath for sandbox b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.080277184Z" level=info msg="runSandbox: deleting pod ID b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b from idIndex" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.080311407Z" level=info msg="runSandbox: removing pod sandbox b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.080327641Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.080341292Z" level=info msg="runSandbox: unmounting shmPath for sandbox b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.092475589Z" level=info msg="runSandbox: removing pod sandbox from storage: b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.092489702Z" level=info msg="runSandbox: removing pod sandbox from storage: b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.095923181Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.095941727Z" level=info msg="runSandbox: 
releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=7a3ad7c8-d19d-46ab-bfcd-9c9fc275ea23 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:12.096132 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:12.096339 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:12.096361 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:12.096404 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.099250071Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:12.099271389Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=d7344e42-1329-4f1f-b282-9aba675f241f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:12.099506 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:12.099534 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:12.099553 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:34:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:12.099586 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:34:13 hub-master-0.workload.bos2.lab systemd[1]: run-netns-75f01d33\x2d2456\x2d4f04\x2da1bc\x2dc1459c0be051.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-75f01d33\x2d2456\x2d4f04\x2da1bc\x2dc1459c0be051.mount has successfully entered the 'dead' state. Jan 23 17:34:13 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b95bc8cd1eeeae7d76dea681141d0e2a86cde8c2f84875dd48ad130c4180c72b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:13 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b82512b08dc5feee955f756870c9344a2d4f10832e2ebf27aa20b8d095405f98-userdata-shm.mount has successfully entered the 'dead' state. 
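The runSandbox entries for each failed pod trace a fixed cleanup order: delete the pod ID from the idIndex, remove the sandbox, delete the infra container ID, unmount the shmPath (the ...-userdata-shm.mount units entering the 'dead' state above), remove the sandbox from storage, then release the reserved container and sandbox names. A schematic Go sketch of that sequence; sandboxStore and all of its methods are hypothetical stand-ins for CRI-O's bookkeeping that merely echo the log's messages:

package main

import "fmt"

// sandboxStore is a hypothetical stand-in for CRI-O's sandbox state.
type sandboxStore struct{}

func (s *sandboxStore) deleteFromIDIndex(id string) { fmt.Println("deleting pod ID", id, "from idIndex") }
func (s *sandboxStore) removeSandbox(id string)     { fmt.Println("removing pod sandbox", id) }
func (s *sandboxStore) deleteContainerID(id string) { fmt.Println("deleting container ID from idIndex for sandbox", id) }
func (s *sandboxStore) unmountShm(id string)        { fmt.Println("unmounting shmPath for sandbox", id) }
func (s *sandboxStore) removeFromStorage(id string) { fmt.Println("removing pod sandbox from storage:", id) }

func (s *sandboxStore) releaseNames(ctr, sb string) {
	fmt.Println("releasing container name:", ctr)
	fmt.Println("releasing pod sandbox name:", sb)
}

// cleanupFailedSandbox replays the order the log traces after a CNI ADD fails.
func cleanupFailedSandbox(s *sandboxStore, id, ctrName, sbName string) {
	s.deleteFromIDIndex(id)
	s.removeSandbox(id)
	s.deleteContainerID(id)
	s.unmountShm(id) // the ...-userdata-shm.mount unit goes 'dead' here
	s.removeFromStorage(id)
	s.releaseNames(ctrName, sbName)
}

func main() {
	cleanupFailedSandbox(&sandboxStore{},
		"b95bc8cd1eee...", // sandbox ID (truncated)
		"k8s_POD_dns-default-srzv5_openshift-dns_...",
		"k8s_dns-default-srzv5_openshift-dns_...")
}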
Jan 23 17:34:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:17.997203 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:34:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:17.997710 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.030491897Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.030530830Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-edfddadf\x2da9e9\x2d4e75\x2da11a\x2dfd23a9bf491e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-edfddadf\x2da9e9\x2d4e75\x2da11a\x2dfd23a9bf491e.mount has successfully entered the 'dead' state. Jan 23 17:34:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-edfddadf\x2da9e9\x2d4e75\x2da11a\x2dfd23a9bf491e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-edfddadf\x2da9e9\x2d4e75\x2da11a\x2dfd23a9bf491e.mount has successfully entered the 'dead' state. Jan 23 17:34:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-edfddadf\x2da9e9\x2d4e75\x2da11a\x2dfd23a9bf491e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-edfddadf\x2da9e9\x2d4e75\x2da11a\x2dfd23a9bf491e.mount has successfully entered the 'dead' state. 
Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.065297880Z" level=info msg="runSandbox: deleting pod ID 95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56 from idIndex" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.065326439Z" level=info msg="runSandbox: removing pod sandbox 95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.065340321Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.065353216Z" level=info msg="runSandbox: unmounting shmPath for sandbox 95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.077450471Z" level=info msg="runSandbox: removing pod sandbox from storage: 95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.080995172Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.081013896Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=982ab301-f19d-499f-bfd5-d11fc6f58588 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:20.081245 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:34:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:20.081283 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:34:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:20.081304 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:34:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:20.081345 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(95671b6304cf16cc61f3fc498ab130f45c2e5f098c436e9ff2f74bf2848ffd56): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:34:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:20.995959 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.996329948Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:20.996370331Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:21.012818477Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/1906044f-1519-46df-a70c-478db45e56a8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:21.012850869Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.029431360Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.029463536Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-29b1c5da\x2d19b6\x2d4c1f\x2db489\x2d2501c7035744.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-29b1c5da\x2d19b6\x2d4c1f\x2db489\x2d2501c7035744.mount has successfully entered the 'dead' state. Jan 23 17:34:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-29b1c5da\x2d19b6\x2d4c1f\x2db489\x2d2501c7035744.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-29b1c5da\x2d19b6\x2d4c1f\x2db489\x2d2501c7035744.mount has successfully entered the 'dead' state. Jan 23 17:34:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-29b1c5da\x2d19b6\x2d4c1f\x2db489\x2d2501c7035744.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-29b1c5da\x2d19b6\x2d4c1f\x2db489\x2d2501c7035744.mount has successfully entered the 'dead' state. Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.066313440Z" level=info msg="runSandbox: deleting pod ID f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b from idIndex" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.066337682Z" level=info msg="runSandbox: removing pod sandbox f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.066353156Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.066365179Z" level=info msg="runSandbox: unmounting shmPath for sandbox f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b-userdata-shm.mount has successfully entered the 'dead' state. 
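The runSandbox lines above trace CRI-O's fixed unwind order after a failed sandbox creation: delete the pod ID from the ID index, remove the sandbox, delete its container ID, unmount the shm path, remove the sandbox from storage, then release the reserved container and pod sandbox names. A minimal sketch of that ordered-teardown pattern; the step names mirror the log, while the step bodies are hypothetical stand-ins for CRI-O's internals:

package main

import "fmt"

// step pairs a log label with a cleanup action; running them in a fixed
// order keeps the state left behind by a partial failure deterministic.
type step struct {
	name string
	run  func() error
}

func main() {
	id := "f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b"
	teardown := []step{
		{"deleting pod ID from idIndex", func() error { return nil }},
		{"removing pod sandbox", func() error { return nil }},
		{"deleting container ID from idIndex", func() error { return nil }},
		{"unmounting shmPath", func() error { return nil }},
		{"removing pod sandbox from storage", func() error { return nil }},
		{"releasing container name", func() error { return nil }},
		{"releasing pod sandbox name", func() error { return nil }},
	}
	for _, s := range teardown {
		fmt.Printf("runSandbox: %s %s\n", s.name, id) // logged before each step, as crio does
		if err := s.run(); err != nil {
			fmt.Println("runSandbox:", s.name, "failed:", err)
		}
	}
}

The systemd mount-unit "Succeeded" entries interleaved above are the kernel-side effect of the unmount steps: each shm and namespace mount enters the 'dead' state as CRI-O tears it down.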
Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.080424688Z" level=info msg="runSandbox: removing pod sandbox from storage: f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.083387687Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.083406137Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=132f695b-b642-4393-8cbc-4615ec7b56c2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:22.083616 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:22.083659 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:34:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:22.083691 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:34:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:22.083739 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(f02cab64e4c163615598240849605960d01758993f24fdbd0a9378604596e63b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:34:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:22.995937 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.996299328Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:22.996336158Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.007281229Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/0833a60e-8d11-4091-bf41-85995579f43d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.007303232Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.035024783Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.035058222Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.035737893Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:34:23.035783441Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6d2fc1dc\x2d5fdc\x2d4dee\x2dac3e\x2d455c24e06978.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6d2fc1dc\x2d5fdc\x2d4dee\x2dac3e\x2d455c24e06978.mount has successfully entered the 'dead' state. Jan 23 17:34:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2bb29091\x2da0e2\x2d4d62\x2da95b\x2d16e02603ee26.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2bb29091\x2da0e2\x2d4d62\x2da95b\x2d16e02603ee26.mount has successfully entered the 'dead' state. Jan 23 17:34:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6d2fc1dc\x2d5fdc\x2d4dee\x2dac3e\x2d455c24e06978.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6d2fc1dc\x2d5fdc\x2d4dee\x2dac3e\x2d455c24e06978.mount has successfully entered the 'dead' state. Jan 23 17:34:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2bb29091\x2da0e2\x2d4d62\x2da95b\x2d16e02603ee26.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2bb29091\x2da0e2\x2d4d62\x2da95b\x2d16e02603ee26.mount has successfully entered the 'dead' state. Jan 23 17:34:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6d2fc1dc\x2d5fdc\x2d4dee\x2dac3e\x2d455c24e06978.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6d2fc1dc\x2d5fdc\x2d4dee\x2dac3e\x2d455c24e06978.mount has successfully entered the 'dead' state. Jan 23 17:34:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2bb29091\x2da0e2\x2d4d62\x2da95b\x2d16e02603ee26.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2bb29091\x2da0e2\x2d4d62\x2da95b\x2d16e02603ee26.mount has successfully entered the 'dead' state. 
Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.082329689Z" level=info msg="runSandbox: deleting pod ID 6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87 from idIndex" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.082359112Z" level=info msg="runSandbox: removing pod sandbox 6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.082372942Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.082384995Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.082330512Z" level=info msg="runSandbox: deleting pod ID f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e from idIndex" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.082445555Z" level=info msg="runSandbox: removing pod sandbox f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.082462451Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.082476698Z" level=info msg="runSandbox: unmounting shmPath for sandbox f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:34:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e-userdata-shm.mount has successfully entered the 'dead' state. 
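The backslash pile-ups in the "Error syncing pod, skipping" entries (17:34:22 above, 17:34:23 below), where \"multus-cni-network\" becomes \\\"multus-cni-network\\\", are not corruption: each layer that wraps the CNI error in a Go quoted string escapes the quotes of the layer beneath it. A two-step demonstration of the effect with Go's %q verb; the inner string is abbreviated from the log:

package main

import "fmt"

func main() {
	// Innermost message, containing quotes of its own.
	cni := `plugin type="multus" name="multus-cni-network" failed (add)`

	one := fmt.Sprintf("%q", cni) // quotes escaped once:  \"multus\"
	two := fmt.Sprintf("%q", one) // escaped again:        \\\"multus\\\"

	fmt.Println(one)
	fmt.Println(two)
}

Reading such entries is easiest from the inside out: strip one layer of backslashes per level of nesting (CNI plugin error, then the CreatePodSandbox wrapper, then the pod worker's err="..." field).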
Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.098423264Z" level=info msg="runSandbox: removing pod sandbox from storage: 6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.098431758Z" level=info msg="runSandbox: removing pod sandbox from storage: f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.101377890Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.101398076Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=63a005f9-c354-4e87-9af3-44323045a87c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:23.101673 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:23.101714 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:34:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:23.101739 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:34:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:23.101784 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6f3efd4ffe111230efd9e5a91a258450284c1bd5ee0db537fec9f0e9e3cade87): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.104487324Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:23.104504460Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=d74fefc3-add3-4822-a395-94b4e4ddf770 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:23.104692 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:23.104733 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:34:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:23.104758 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:34:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:23.104806 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f906ba2d05d85018f3ee2563922c3721a3af2ec7070e03b96741bc8e7885531e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:24.100576767Z" level=info msg="NetworkStart: stopping network for sandbox 819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:24.100715186Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/b4c35e7f-336b-4a84-8a8b-c143c4360bbf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:24.100737806Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:24.100744490Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:24.100750121Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:24.996460 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:24.996831397Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:24.996872006Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.008588954Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/431d70fe-8283-4f8f-be3c-cd8486b4a908 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.008619685Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.033339587Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.033373364Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b9f47646\x2db7c2\x2d4dd2\x2db09c\x2d3afc66b24abc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b9f47646\x2db7c2\x2d4dd2\x2db09c\x2d3afc66b24abc.mount has successfully entered the 'dead' state. Jan 23 17:34:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b9f47646\x2db7c2\x2d4dd2\x2db09c\x2d3afc66b24abc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b9f47646\x2db7c2\x2d4dd2\x2db09c\x2d3afc66b24abc.mount has successfully entered the 'dead' state. Jan 23 17:34:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b9f47646\x2db7c2\x2d4dd2\x2db09c\x2d3afc66b24abc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b9f47646\x2db7c2\x2d4dd2\x2db09c\x2d3afc66b24abc.mount has successfully entered the 'dead' state. Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.070316270Z" level=info msg="runSandbox: deleting pod ID 914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0 from idIndex" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.070340804Z" level=info msg="runSandbox: removing pod sandbox 914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.070354283Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.070368113Z" level=info msg="runSandbox: unmounting shmPath for sandbox 914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0-userdata-shm.mount has successfully entered the 'dead' state. 
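Each "Got pod network &{Name:... NetNS:...}" line in this section is Go's %+v rendering of a pointer to CRI-O's pod-network description (the visible fields match the ocicni PodNetwork type). A simplified stand-in struct reproduces the shape, populated from the dns-default-srzv5 entry at 17:34:25 above; RuntimeConfig and Aliases are reduced to plain maps here, which is an assumption of the sketch rather than the upstream definition:

package main

import "fmt"

// podNetwork is a simplified stand-in for the struct printed in the
// "Got pod network" log lines; the real RuntimeConfig values carry
// IP/MAC/port-mapping details that are elided here.
type podNetwork struct {
	Name          string
	Namespace     string
	ID            string
	UID           string
	NetNS         string
	Networks      []string
	RuntimeConfig map[string]string
	Aliases       map[string][]string
}

func main() {
	pn := &podNetwork{
		Name:      "dns-default-srzv5",
		Namespace: "openshift-dns",
		ID:        "271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987",
		UID:       "3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e",
		NetNS:     "/var/run/netns/431d70fe-8283-4f8f-be3c-cd8486b4a908",
	}
	// %+v on a pointer prints &{Field:value ...}, the exact shape of
	// the "Got pod network" entries; nil slices and maps render as
	// the empty [] and map[] seen in the log.
	fmt.Printf("Got pod network %+v\n", pn)
}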
Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.082442638Z" level=info msg="runSandbox: removing pod sandbox from storage: 914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.085234675Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:25.085252290Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=4635355d-bfac-479d-b0b5-0b9f1a0d51f2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:25.085485 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:25.085524 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:25.085548 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:25.085593 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(914100ae3623677c45e46f36c90e1ff66420257f152f97fece2c6b74677622e0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:34:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:26.995844 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:34:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:26.996165092Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:26.996196835Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:27.007401071Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/25af2d52-eeef-4c7b-aa57-d9c6893eb116 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:27.007428742Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:27.904261 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:27.904280 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:27.904288 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:27.904294 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:27.904300 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:27.904309 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:34:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:27.904324 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:34:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:28.142549896Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:34:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:29.996391 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:34:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:29.996889 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node 
pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:34:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:31.996212 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:34:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:31.996571888Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:31.996843495Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:34:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:32.008812036Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/55731a3d-a81c-4c35-8cdc-ff75489d7724 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:34:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:32.008835278Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:34:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:33.995797 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:34:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:33.995952 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:34:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:33.996106410Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:33.996145067Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:34:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:33.996234856Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:33.996268293Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:34:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:34.011227386Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/7b713219-1bcf-4694-83aa-751555edb133 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:34:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:34.011248604Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:34:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:34.011785542Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/3ebfc818-1265-4d17-98c4-50fb2518e52b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:34:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:34.011808462Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:34:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:35.118575136Z" level=info msg="NetworkStart: stopping network for sandbox 8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:35.118708770Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/464cc3b2-f67d-4454-b358-7f23d46a2a42 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:34:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:35.118730072Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:34:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:35.118737060Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:34:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:35.118743367Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:34:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:36.995964 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:34:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:36.996316798Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:36.996350546Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.006847296Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/0043c5cd-b557-45e9-8717-8f42a0b48f06 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.006866181Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.032703964Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.032736947Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-53cebc50\x2dadb8\x2d4a1c\x2dbcd3\x2de3d15103a1ce.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-53cebc50\x2dadb8\x2d4a1c\x2dbcd3\x2de3d15103a1ce.mount has successfully entered the 'dead' state.
Jan 23 17:34:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-53cebc50\x2dadb8\x2d4a1c\x2dbcd3\x2de3d15103a1ce.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-53cebc50\x2dadb8\x2d4a1c\x2dbcd3\x2de3d15103a1ce.mount has successfully entered the 'dead' state.
Jan 23 17:34:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-53cebc50\x2dadb8\x2d4a1c\x2dbcd3\x2de3d15103a1ce.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-53cebc50\x2dadb8\x2d4a1c\x2dbcd3\x2de3d15103a1ce.mount has successfully entered the 'dead' state.
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.075318281Z" level=info msg="runSandbox: deleting pod ID 58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd from idIndex" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.075340703Z" level=info msg="runSandbox: removing pod sandbox 58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.075354767Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.075367966Z" level=info msg="runSandbox: unmounting shmPath for sandbox 58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.087440464Z" level=info msg="runSandbox: removing pod sandbox from storage: 58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.090415932Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.090433754Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=1c72d6fc-84e7-460c-ba06-034c163f380b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:37.090625 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:34:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:37.090681 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:34:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:37.090705 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:34:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:37.090749 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:34:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:37.996823 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.997151778Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:37.997184156Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:34:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-58e498c74634e94e865df157319bc2e5052c10b00cfb3321f04bf091710a16cd-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:38.008023223Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/e3a481e9-c108-495a-b697-c340ff60fa8f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:34:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:38.008041569Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.031166413Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.031202699Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1e4b7f3b\x2dfa5c\x2d4e30\x2d8cf8\x2de52e0b9c1f4a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-1e4b7f3b\x2dfa5c\x2d4e30\x2d8cf8\x2de52e0b9c1f4a.mount has successfully entered the 'dead' state.
Jan 23 17:34:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1e4b7f3b\x2dfa5c\x2d4e30\x2d8cf8\x2de52e0b9c1f4a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-1e4b7f3b\x2dfa5c\x2d4e30\x2d8cf8\x2de52e0b9c1f4a.mount has successfully entered the 'dead' state.
Jan 23 17:34:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1e4b7f3b\x2dfa5c\x2d4e30\x2d8cf8\x2de52e0b9c1f4a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-1e4b7f3b\x2dfa5c\x2d4e30\x2d8cf8\x2de52e0b9c1f4a.mount has successfully entered the 'dead' state.
Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.072304451Z" level=info msg="runSandbox: deleting pod ID 46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8 from idIndex" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.072330459Z" level=info msg="runSandbox: removing pod sandbox 46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.072343225Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.072355081Z" level=info msg="runSandbox: unmounting shmPath for sandbox 46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:34:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.088431516Z" level=info msg="runSandbox: removing pod sandbox from storage: 46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.091429427Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:40.091448698Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=e2fafd22-2d7f-4ab7-933b-63192697dea0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:40.091628 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:34:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:40.091677 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:34:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:40.091701 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:34:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:40.091750 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(46e33d4c7188f252901a750828436e80f574aea47ae19a5803e034607d041dd8): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:34:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:40.997161 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:34:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:40.997665 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.155833749Z" level=info msg="NetworkStart: stopping network for sandbox 4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.155984410Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/45cb649d-89d7-4b91-baf0-9e375340c578 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.156006609Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.156013219Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.156020661Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.157039650Z" level=info msg="NetworkStart: stopping network for sandbox dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.157139552Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/cc298806-0174-4f4e-826c-a0e3b5ee0573 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.157158711Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.157165390Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.157171289Z" level=info 
msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.158647549Z" level=info msg="NetworkStart: stopping network for sandbox decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.158799280Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/ac7c7cca-6046-45e0-8bd5-c6f75a13f587 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.158825818Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.158836855Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.158844574Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.158825357Z" level=info msg="NetworkStart: stopping network for sandbox 644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.159026366Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/d3867752-a6d4-4456-aebc-39e87141a6ae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.159049780Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.159056414Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.159062232Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.160654672Z" level=info msg="NetworkStart: stopping network for sandbox 2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.160793277Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4 
UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/a5b83855-c2b8-4fd9-8041-1517d2f8cd04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.160821725Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.160832715Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:34:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:45.160842036Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:47.996924 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:34:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:47.997470584Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:47.997510680Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:48.012238031Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/5f7df367-2a26-4b8b-8843-9c043f31f468 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:48.012261442Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:54.996299 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:34:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:54.996671439Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:34:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:54.996711188Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:34:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:55.007469771Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/8d848589-2d8c-4f98-8fc2-a0d99ba9d705 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:34:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:55.007490388Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:34:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:34:55.996618 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:34:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:34:55.997151 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:34:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:34:58.143003317Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:06.026394783Z" level=info msg="NetworkStart: stopping network for sandbox 5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:06.026558212Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/1906044f-1519-46df-a70c-478db45e56a8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:06.026585427Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:06.026592416Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:06.026599657Z" level=info msg="Deleting pod 
openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:07.997401 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:35:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:07.997893 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:08.021350364Z" level=info msg="NetworkStart: stopping network for sandbox 34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:08.021501945Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/0833a60e-8d11-4091-bf41-85995579f43d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:08.021525583Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:08.021531980Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:08.021540432Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.112346696Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.112382469Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 
hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b4c35e7f\x2d336b\x2d4a84\x2d8a8b\x2dc143c4360bbf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b4c35e7f\x2d336b\x2d4a84\x2d8a8b\x2dc143c4360bbf.mount has successfully entered the 'dead' state. Jan 23 17:35:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b4c35e7f\x2d336b\x2d4a84\x2d8a8b\x2dc143c4360bbf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b4c35e7f\x2d336b\x2d4a84\x2d8a8b\x2dc143c4360bbf.mount has successfully entered the 'dead' state. Jan 23 17:35:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b4c35e7f\x2d336b\x2d4a84\x2d8a8b\x2dc143c4360bbf.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b4c35e7f\x2d336b\x2d4a84\x2d8a8b\x2dc143c4360bbf.mount has successfully entered the 'dead' state. Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.147305589Z" level=info msg="runSandbox: deleting pod ID 819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c from idIndex" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.147329735Z" level=info msg="runSandbox: removing pod sandbox 819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.147342852Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.147354475Z" level=info msg="runSandbox: unmounting shmPath for sandbox 819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.160465603Z" level=info msg="runSandbox: removing pod sandbox from storage: 819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.163679778Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.163699756Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=15cfcc03-c89d-49e4-8c42-83123f6934b8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:09.163825 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:35:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:09.163981 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:35:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:09.164005 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:35:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:09.164056 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(819137af548c8c45a100827c2aa828c664d4cd2f9014d2ba784dd8265f40220c): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298 Jan 23 17:35:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:09.231898 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.232190569Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.232231810Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.242906353Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/6e3a8cfc-d2bb-4db4-98d6-304730c3dfee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:09.242925346Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:10.021118255Z" level=info msg="NetworkStart: stopping network for sandbox 271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:10.021276933Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/431d70fe-8283-4f8f-be3c-cd8486b4a908 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:10.021303706Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:10.021311189Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:10.021318204Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:12.020220731Z" level=info msg="NetworkStart: stopping network for sandbox edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:12.020384887Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/25af2d52-eeef-4c7b-aa57-d9c6893eb116 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:12 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:35:12.020408684Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:12.020417842Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:12.020425176Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:17.020956373Z" level=info msg="NetworkStart: stopping network for sandbox ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:17.021102938Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/55731a3d-a81c-4c35-8cdc-ff75489d7724 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:17.021127189Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:17.021134194Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:17.021140563Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.023523874Z" level=info msg="NetworkStart: stopping network for sandbox 0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.023658412Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/7b713219-1bcf-4694-83aa-751555edb133 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.023680844Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.023689120Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.023695194Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.023941524Z" level=info msg="NetworkStart: stopping network for sandbox 
3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.024058972Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/3ebfc818-1265-4d17-98c4-50fb2518e52b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.024080545Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.024087684Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:19.024094630Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:19.996824 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:35:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:19.997346 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.130507207Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.130543796Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-464cc3b2\x2df67d\x2d4454\x2db358\x2d7f23d46a2a42.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-464cc3b2\x2df67d\x2d4454\x2db358\x2d7f23d46a2a42.mount has successfully entered the 'dead' state. 
Jan 23 17:35:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-464cc3b2\x2df67d\x2d4454\x2db358\x2d7f23d46a2a42.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-464cc3b2\x2df67d\x2d4454\x2db358\x2d7f23d46a2a42.mount has successfully entered the 'dead' state.
Jan 23 17:35:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-464cc3b2\x2df67d\x2d4454\x2db358\x2d7f23d46a2a42.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-464cc3b2\x2df67d\x2d4454\x2db358\x2d7f23d46a2a42.mount has successfully entered the 'dead' state.
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.178313430Z" level=info msg="runSandbox: deleting pod ID 8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c from idIndex" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.178339297Z" level=info msg="runSandbox: removing pod sandbox 8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.178352552Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.178364031Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.202444693Z" level=info msg="runSandbox: removing pod sandbox from storage: 8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.205672486Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.205692026Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=1bebfc3d-087c-4d87-a215-a1cec33b400a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:20.205840    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:35:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:20.205880    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:35:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:20.205902    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:35:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:20.205947    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(8c6723aeb2cf29182536de9fe36b2acb1cfb8c019ab3e83f68565cc3530f9d2c): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30
Jan 23 17:35:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:20.251646    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.251857363Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.251887623Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.265496362Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/f33d3f42-3a4c-4746-a8c2-677d4e732ae9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:35:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:20.265518725Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:35:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:22.021257848Z" level=info msg="NetworkStart: stopping network for sandbox 1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:22.021408447Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/0043c5cd-b557-45e9-8717-8f42a0b48f06 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:35:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:22.021431695Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:35:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:22.021439274Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:35:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:22.021445694Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:23.021891362Z" level=info msg="NetworkStart: stopping network for sandbox b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:23.022023345Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/e3a481e9-c108-495a-b697-c340ff60fa8f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:23.022045953Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:23.022051926Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:35:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:23.022058876Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:27.905223    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:27.905244    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:27.905251    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:27.905257    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:27.905263    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:27.905270    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:35:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:27.905276    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:35:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:27.907800316Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=38fd8bbc-1ac3-467e-b9cc-45abc28cf190 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:35:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:27.907918598Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=38fd8bbc-1ac3-467e-b9cc-45abc28cf190 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:35:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:28.141786455Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.167901524Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.167947360Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.167914143Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.168017164Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.170331407Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.170364388Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.170420285Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.170446676Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.171296625Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.171327853Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cc298806\x2d0174\x2d4f4e\x2d826c\x2da0e3b5ee0573.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-cc298806\x2d0174\x2d4f4e\x2d826c\x2da0e3b5ee0573.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-45cb649d\x2d89d7\x2d4b91\x2dbaf0\x2d9e375340c578.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-45cb649d\x2d89d7\x2d4b91\x2dbaf0\x2d9e375340c578.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d3867752\x2da6d4\x2d4456\x2daebc\x2d39e87141a6ae.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d3867752\x2da6d4\x2d4456\x2daebc\x2d39e87141a6ae.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a5b83855\x2dc2b8\x2d4fd9\x2d8041\x2d1517d2f8cd04.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a5b83855\x2dc2b8\x2d4fd9\x2d8041\x2d1517d2f8cd04.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ac7c7cca\x2d6046\x2d45e0\x2d8bd5\x2dc6f75a13f587.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ac7c7cca\x2d6046\x2d45e0\x2d8bd5\x2dc6f75a13f587.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ac7c7cca\x2d6046\x2d45e0\x2d8bd5\x2dc6f75a13f587.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ac7c7cca\x2d6046\x2d45e0\x2d8bd5\x2dc6f75a13f587.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-45cb649d\x2d89d7\x2d4b91\x2dbaf0\x2d9e375340c578.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-45cb649d\x2d89d7\x2d4b91\x2dbaf0\x2d9e375340c578.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cc298806\x2d0174\x2d4f4e\x2d826c\x2da0e3b5ee0573.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-cc298806\x2d0174\x2d4f4e\x2d826c\x2da0e3b5ee0573.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d3867752\x2da6d4\x2d4456\x2daebc\x2d39e87141a6ae.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d3867752\x2da6d4\x2d4456\x2daebc\x2d39e87141a6ae.mount has successfully entered the 'dead' state.
Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.214331935Z" level=info msg="runSandbox: deleting pod ID dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d from idIndex" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.214365822Z" level=info msg="runSandbox: removing pod sandbox dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.214383011Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.214395823Z" level=info msg="runSandbox: unmounting shmPath for sandbox dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.214332777Z" level=info msg="runSandbox: deleting pod ID 4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb from idIndex" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.214457319Z" level=info msg="runSandbox: removing pod sandbox 4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.214471607Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.214484836Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.215275312Z" level=info msg="runSandbox: deleting pod ID decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee from idIndex" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.215299248Z" level=info msg="runSandbox: removing pod sandbox decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.215312088Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.215327015Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.222309437Z" level=info msg="runSandbox: deleting pod ID 2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4 from idIndex" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.222337887Z" level=info msg="runSandbox: removing pod sandbox 2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.222351668Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.222365355Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.222312724Z" level=info msg="runSandbox: deleting pod ID 644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31 from idIndex" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.222430749Z" level=info msg="runSandbox: removing pod sandbox 644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.222444318Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.222458131Z" level=info msg="runSandbox: unmounting shmPath for sandbox 644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.234445154Z" level=info msg="runSandbox: removing pod sandbox from storage: 4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.234482227Z" level=info msg="runSandbox: removing pod sandbox from storage: decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.234535552Z" level=info msg="runSandbox: removing pod sandbox from storage: dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.237323403Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.237343029Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=7d321da9-b8a3-48f6-998a-bd168ca942d2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.237565 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.237607 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.237630 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.237676 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.240846518Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.240868889Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=91537689-1d70-436c-b009-1ae5722db7d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.241081 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.241116 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.241141 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.241182 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.243523472Z" level=info msg="runSandbox: removing pod sandbox from storage: 2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.244200036Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.244231979Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f75ffa4a-d26a-416b-855b-2e04550d65e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.244445359Z" level=info msg="runSandbox: removing pod sandbox from storage: 644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.244452 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.244484 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.244504 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.244544 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.247415059Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.247433547Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=543584e0-c0e7-45c9-9a70-e474e26f1548 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.247633 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.247670 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.247693 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.247741 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.250525084Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.250542472Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=2b7bd357-9012-477d-94df-b8b737e9e184 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.250734 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.250765 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.250787 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.250821 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:30.266996 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:30.267058 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:30.267128 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:30.267326 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:30.267380 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267332200Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267362372Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267423988Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267460843Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267560893Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267578845Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267592440Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267616139Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267678195Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.267704611Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.289386703Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/8ad48ba6-4fd3-44fe-8823-01c4f2d2210c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.289416056Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.295928005Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb 
UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/bf0ee918-03ef-4981-874e-5d4ba3a171c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.295949209Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.296738012Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d3051070-af47-4b64-94a5-65bb87c734a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.296758791Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.300437514Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/cbc235b8-a841-4b32-a209-a277c0a5bbc5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.300458583Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.301721524Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/12dd1ee5-4f8a-4f7d-8f77-5dc18002b984 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:30.301742263Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:30.997042 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:35:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:30.997553 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d3867752\x2da6d4\x2d4456\x2daebc\x2d39e87141a6ae.mount: Succeeded. 
Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a5b83855\x2dc2b8\x2d4fd9\x2d8041\x2d1517d2f8cd04.mount: Succeeded. Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a5b83855\x2dc2b8\x2d4fd9\x2d8041\x2d1517d2f8cd04.mount: Succeeded. Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ac7c7cca\x2d6046\x2d45e0\x2d8bd5\x2dc6f75a13f587.mount: Succeeded. Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cc298806\x2d0174\x2d4f4e\x2d826c\x2da0e3b5ee0573.mount: Succeeded. Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-45cb649d\x2d89d7\x2d4b91\x2dbaf0\x2d9e375340c578.mount: Succeeded. Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dd2d838ab2c759cea43947d357d1be5e096d25d17f07db504803922dbf6e7c2d-userdata-shm.mount: Succeeded. Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-644fdda2865b342e273259c31170641b2749dbee61ccd6e286ddddbfbc5a1d31-userdata-shm.mount: Succeeded. Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2eba42ea6b8e9754a18027a674e0e5e573664c141ef8653f7ed23290cf00bce4-userdata-shm.mount: Succeeded.
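
The \x2d runs in the mount unit names above are not corruption: systemd escapes any byte outside its allowed character set when it turns a path such as /run/netns/a5b83855-c2b8-... into a unit name, so every hyphen inside a UUID becomes \x2d. A minimal Go sketch of that escaping rule (illustrative only, not systemd source; real unit names also map '/' separators to '-'):

    package main

    import "fmt"

    // escapeUnitComponent mimics the escaping visible in the mount units above:
    // bytes outside [a-zA-Z0-9:_.] are rewritten as \xXX, which is why '-' shows
    // up as \x2d.
    func escapeUnitComponent(s string) string {
        out := make([]byte, 0, len(s))
        for i := 0; i < len(s); i++ {
            c := s[i]
            if c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' || c >= '0' && c <= '9' || c == ':' || c == '_' || c == '.' {
                out = append(out, c)
            } else {
                out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
            }
        }
        return string(out)
    }

    func main() {
        netns := "a5b83855-c2b8-4fd9-8041-1517d2f8cd04"
        fmt.Println("run-netns-" + escapeUnitComponent(netns) + ".mount")
        // Output: run-netns-a5b83855\x2dc2b8\x2d4fd9\x2d8041\x2d1517d2f8cd04.mount
    }
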
Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-decf03ea835a9d36b3ac33408c37669e0179738eea9e8d2f3c869c8e7ccd99ee-userdata-shm.mount: Succeeded. Jan 23 17:35:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4dec955752511db1ddad639cbb4d49a1c1ef381c3a92052110b7b436eab79bcb-userdata-shm.mount: Succeeded. Jan 23 17:35:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:33.024016964Z" level=info msg="NetworkStart: stopping network for sandbox 3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:33.024260648Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/5f7df367-2a26-4b8b-8843-9c043f31f468 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:33.024287614Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:33.024294677Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:33.024301262Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:40.021327328Z" level=info msg="NetworkStart: stopping network for sandbox 87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:40.021464226Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/8d848589-2d8c-4f98-8fc2-a0d99ba9d705 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:40.021486628Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:40.021493170Z" level=warning msg="falling back to loading from existing plugins
on disk" Jan 23 17:35:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:40.021499078Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:41.996802 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:35:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:41.997313 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.037956976Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.037996231Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1906044f\x2d1519\x2d46df\x2da70c\x2d478db45e56a8.mount: Succeeded. Jan 23 17:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1906044f\x2d1519\x2d46df\x2da70c\x2d478db45e56a8.mount: Succeeded. Jan 23 17:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1906044f\x2d1519\x2d46df\x2da70c\x2d478db45e56a8.mount: Succeeded.
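
The "error loading cached network config ... falling back to loading from existing plugins on disk" pair above is a teardown detail worth noting: on DEL, the runtime first looks for the CNI config it cached when the sandbox's ADD succeeded, and only re-reads the config directory on disk when no cache entry exists, which is the case here because these ADDs never completed. A sketch of that lookup order, under those assumptions (the on-disk path is illustrative, and this is not the runtime's source):

    package main

    import "fmt"

    // cachedConfig stands in for the per-sandbox CNI config cached when an ADD
    // succeeds. It is empty here because the ADDs above never completed, which
    // is exactly the situation the two log lines describe.
    var cachedConfig = map[string]string{}

    func configForTeardown(sandboxID string) string {
        if conf, ok := cachedConfig[sandboxID]; ok {
            return conf // normal case: tear down with the same config used on ADD
        }
        fmt.Println(`error loading cached network config: network "multus-cni-network" not found in CNI cache`)
        fmt.Println("falling back to loading from existing plugins on disk")
        return "/etc/kubernetes/cni/net.d/00-multus.conf" // illustrative on-disk path
    }

    func main() {
        conf := configForTeardown("3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5")
        fmt.Println("issuing CNI DEL with config:", conf)
    }
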
Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.091304133Z" level=info msg="runSandbox: deleting pod ID 5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52 from idIndex" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.091329057Z" level=info msg="runSandbox: removing pod sandbox 5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.091343572Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.091355130Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.107470866Z" level=info msg="runSandbox: removing pod sandbox from storage: 5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.110650850Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:51.110670379Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=0c57d32e-e0a7-41d5-b212-d8a26140ac1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:51.111032 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:51.111185 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:51.111217 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:35:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:51.111271 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:35:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5949204eb323df66b9ac4f94d079c048540c34a72f8f7b5aae8b04b6994dbb52-userdata-shm.mount: Succeeded. Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.031801727Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.031842622Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0833a60e\x2d8d11\x2d4091\x2dbf41\x2d85995579f43d.mount: Succeeded. Jan 23 17:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0833a60e\x2d8d11\x2d4091\x2dbf41\x2d85995579f43d.mount: Succeeded. Jan 23 17:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0833a60e\x2d8d11\x2d4091\x2dbf41\x2d85995579f43d.mount: Succeeded.
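
Every CreatePodSandbox failure in this stretch has the same root cause: Multus polls for a readiness indicator file that the default network plugin (OVN-Kubernetes) writes once it is up, and because ovnkube-node is crash-looping above, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf never appears and the poll times out. A minimal sketch of that wait, using the k8s.io/apimachinery poll helper; the interval and timeout values are assumptions, and this is an illustration, not Multus source:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until the indicator file exists, producing
    // the error text seen in the log when the deadline passes first.
    func waitForReadinessIndicator(path string) error {
        err := wait.PollImmediate(time.Second, 45*time.Second, func() (bool, error) {
            _, statErr := os.Stat(path)
            return statErr == nil, nil // keep polling until the file exists
        })
        if err != nil {
            return fmt.Errorf("have you checked that your default network is ready? "+
                "still waiting for readinessindicatorfile @ %s. pollimmediate error: %v", path, err)
        }
        return nil
    }

    func main() {
        if err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
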
Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.083283359Z" level=info msg="runSandbox: deleting pod ID 34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554 from idIndex" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.083309639Z" level=info msg="runSandbox: removing pod sandbox 34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.083323625Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.083335479Z" level=info msg="runSandbox: unmounting shmPath for sandbox 34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554-userdata-shm.mount: Succeeded. Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.099415806Z" level=info msg="runSandbox: removing pod sandbox from storage: 34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.102983907Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:53.103001867Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=17f80254-75e9-4f6e-ae36-acd767d09092 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:53.103231 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus:
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:53.103282 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:53.103309 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:35:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:53.103365 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(34c652e6ec8baf564b79e4f1421e6bb22b4b3136c670b6fb46e133e8ee14e554): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:35:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:54.257910813Z" level=info msg="NetworkStart: stopping network for sandbox 2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:54.258048457Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/6e3a8cfc-d2bb-4db4-98d6-304730c3dfee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:35:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:54.258070212Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:35:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:54.258076623Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:35:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:54.258082812Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:35:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:35:54.997003 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:35:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:54.997529 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.033149764Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.033200936Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-431d70fe\x2d8283\x2d4f8f\x2dbe3c\x2dcd8486b4a908.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-431d70fe\x2d8283\x2d4f8f\x2dbe3c\x2dcd8486b4a908.mount has successfully entered the 'dead' state. Jan 23 17:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-431d70fe\x2d8283\x2d4f8f\x2dbe3c\x2dcd8486b4a908.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-431d70fe\x2d8283\x2d4f8f\x2dbe3c\x2dcd8486b4a908.mount has successfully entered the 'dead' state. Jan 23 17:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-431d70fe\x2d8283\x2d4f8f\x2dbe3c\x2dcd8486b4a908.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-431d70fe\x2d8283\x2d4f8f\x2dbe3c\x2dcd8486b4a908.mount has successfully entered the 'dead' state. 
Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.085275815Z" level=info msg="runSandbox: deleting pod ID 271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987 from idIndex" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.085303469Z" level=info msg="runSandbox: removing pod sandbox 271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.085324605Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.085344229Z" level=info msg="runSandbox: unmounting shmPath for sandbox 271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987-userdata-shm.mount: Succeeded. Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.098423550Z" level=info msg="runSandbox: removing pod sandbox from storage: 271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.102011318Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:55.102029724Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=a473e83b-d153-4987-b24a-219fd8cf8917 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:55.102180 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 17:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:55.102225 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:55.102248 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:35:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:55.102285 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(271f847ebfef36fecb1cd83bf150ee98136020f740050702f12eef4fff0d9987): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.031497884Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.031542711Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-25af2d52\x2deeef\x2d4c7b\x2daa57\x2dd9c6893eb116.mount: Succeeded. Jan 23 17:35:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-25af2d52\x2deeef\x2d4c7b\x2daa57\x2dd9c6893eb116.mount: Succeeded. Jan 23 17:35:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-25af2d52\x2deeef\x2d4c7b\x2daa57\x2dd9c6893eb116.mount: Succeeded.
Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.070305160Z" level=info msg="runSandbox: deleting pod ID edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a from idIndex" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.070332333Z" level=info msg="runSandbox: removing pod sandbox edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.070347518Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.070363757Z" level=info msg="runSandbox: unmounting shmPath for sandbox edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a-userdata-shm.mount: Succeeded. Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.090466092Z" level=info msg="runSandbox: removing pod sandbox from storage: edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.093978809Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:57.093997777Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=9ad9cc57-e4e3-451a-8bb8-39f55a156173 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:57.094158 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 17:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:57.094217 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:57.094242 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:35:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:35:57.094288 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(edace855b534b06207f64902552a9af1afac6b6027b4335e352362bb9b91d63a): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:35:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:35:58.141492595Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.031425611Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.031676861Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-55731a3d\x2da81c\x2d4c35\x2d8cdc\x2dff75489d7724.mount: Succeeded. Jan 23 17:36:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-55731a3d\x2da81c\x2d4c35\x2d8cdc\x2dff75489d7724.mount: Succeeded. Jan 23 17:36:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-55731a3d\x2da81c\x2d4c35\x2d8cdc\x2dff75489d7724.mount: Succeeded.
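
Each failed sandbox is then torn down with the same fixed sequence of runSandbox steps, visible in the stanza that follows and in the identical stanzas above: index bookkeeping first, then the shm unmount and storage removal, and the pod and container names released only at the very end so the next CreatePodSandbox attempt can reuse them. A sketch of that ordering; the step strings mirror the log messages, but the function is illustrative, not CRI-O source:

    package main

    import "fmt"

    func cleanupFailedSandbox(id string) {
        for _, step := range []string{
            "deleting pod ID " + id + " from idIndex",
            "removing pod sandbox " + id,
            "deleting container ID from idIndex for sandbox " + id,
            "unmounting shmPath for sandbox " + id,
            "removing pod sandbox from storage: " + id,
            "releasing container name",  // names freed last, so a retry
            "releasing pod sandbox name", // never collides with leftovers
        } {
            fmt.Println("runSandbox: " + step)
        }
    }

    func main() {
        cleanupFailedSandbox("ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78")
    }
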
Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.075275930Z" level=info msg="runSandbox: deleting pod ID ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78 from idIndex" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.075303134Z" level=info msg="runSandbox: removing pod sandbox ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.075316961Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.075328692Z" level=info msg="runSandbox: unmounting shmPath for sandbox ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78-userdata-shm.mount: Succeeded. Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.095426031Z" level=info msg="runSandbox: removing pod sandbox from storage: ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.098847305Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:02.098868716Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=da91d855-61e0-46d0-a9cc-8cfc1708341e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:02.099090 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 17:36:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:02.099142 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:36:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:02.099166 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:36:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:02.099218 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ea12dc50a7da831bd73d63f7b4b8ae2a503151251270dfe5636fd0cc8c5aed78): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.034149779Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.034185718Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.034871697Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.034900763Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3ebfc818\x2d1265\x2d4d17\x2d98c4\x2d50fb2518e52b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3ebfc818\x2d1265\x2d4d17\x2d98c4\x2d50fb2518e52b.mount has successfully entered the 'dead' state. Jan 23 17:36:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7b713219\x2d1bcf\x2d4694\x2d83aa\x2d751555edb133.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7b713219\x2d1bcf\x2d4694\x2d83aa\x2d751555edb133.mount has successfully entered the 'dead' state. Jan 23 17:36:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3ebfc818\x2d1265\x2d4d17\x2d98c4\x2d50fb2518e52b.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3ebfc818\x2d1265\x2d4d17\x2d98c4\x2d50fb2518e52b.mount has successfully entered the 'dead' state. Jan 23 17:36:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7b713219\x2d1bcf\x2d4694\x2d83aa\x2d751555edb133.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7b713219\x2d1bcf\x2d4694\x2d83aa\x2d751555edb133.mount has successfully entered the 'dead' state. Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.081296869Z" level=info msg="runSandbox: deleting pod ID 3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6 from idIndex" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.081324605Z" level=info msg="runSandbox: removing pod sandbox 3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.081336994Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.081348970Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.089299242Z" level=info msg="runSandbox: deleting pod ID 0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c from idIndex" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.091830886Z" level=info msg="runSandbox: removing pod sandbox 0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.091936816Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.093470133Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.105431461Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.108854179Z" level=info msg="runSandbox: releasing container name: 
k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.108871798Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=897607ca-6b50-4b5a-ba77-2d500a800d56 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:04.109088 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:04.109135 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:04.109157 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:04.109203 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.110418962Z" level=info msg="runSandbox: removing pod sandbox from storage: 0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.113580645Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.113597682Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=7dbe9666-d515-4c1a-b627-7d202245adbe name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:04.113817 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:04.113857 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:04.113879 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:04.113931 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:36:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3ebfc818\x2d1265\x2d4d17\x2d98c4\x2d50fb2518e52b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3ebfc818\x2d1265\x2d4d17\x2d98c4\x2d50fb2518e52b.mount has successfully entered the 'dead' state. Jan 23 17:36:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7b713219\x2d1bcf\x2d4694\x2d83aa\x2d751555edb133.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7b713219\x2d1bcf\x2d4694\x2d83aa\x2d751555edb133.mount has successfully entered the 'dead' state. Jan 23 17:36:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3d3a99aa1d44d189578a7a52f56355208d2cd26bde4b606e9e3bdfee3d5eb7e6-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:36:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0db76f0a4030df71b8c3cbd4c99d2844c3dbb8bb1e89352d8bf33e8af604a14c-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:36:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:04.996437 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.996719247Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:04.996752604Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.007605130Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/320bbbf2-0cfe-450e-a246-0134df47e99e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.007623713Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.279636455Z" level=info msg="NetworkStart: stopping network for sandbox 962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.279811258Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/f33d3f42-3a4c-4746-a8c2-677d4e732ae9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.279833232Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.279839510Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.279845464Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:05.996013 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.996382376Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:05.996415924Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:06.007064600Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/167c30f5-0fe5-4276-9684-1f8f0d03cd46 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:06.007083832Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.032347355Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.032379246Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0043c5cd\x2db557\x2d45e9\x2d8717\x2d8f42a0b48f06.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0043c5cd\x2db557\x2d45e9\x2d8717\x2d8f42a0b48f06.mount has successfully entered the 'dead' state. Jan 23 17:36:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0043c5cd\x2db557\x2d45e9\x2d8717\x2d8f42a0b48f06.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0043c5cd\x2db557\x2d45e9\x2d8717\x2d8f42a0b48f06.mount has successfully entered the 'dead' state. Jan 23 17:36:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0043c5cd\x2db557\x2d45e9\x2d8717\x2d8f42a0b48f06.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0043c5cd\x2db557\x2d45e9\x2d8717\x2d8f42a0b48f06.mount has successfully entered the 'dead' state. Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.089306606Z" level=info msg="runSandbox: deleting pod ID 1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693 from idIndex" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.089330594Z" level=info msg="runSandbox: removing pod sandbox 1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.089343570Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.089357717Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693-userdata-shm.mount has successfully entered the 'dead' state. 
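Every sandbox failure in this stretch bottoms out in the same condition: Multus polls for the default network's readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes writes once it is up, and the poll expires because ovnkube-node is crash-looping (see the "back-off 5m0s restarting failed container=ovnkube-node" entry further down). The resulting "timed out waiting for the condition" text is the standard error from a Kubernetes wait.PollImmediate loop. A minimal Go sketch of that kind of wait, assuming a one-second poll interval and an illustrative ten-minute timeout (not Multus's exact code or tunables):

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator blocks until path exists or timeout elapses.
// wait.PollImmediate runs the condition once up front and then on every
// tick; on expiry it returns an error whose message is the familiar
// "timed out waiting for the condition".
func waitForReadinessIndicator(path string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			return false, nil // not there yet; keep polling
		}
		return true, nil
	})
}

func main() {
	err := waitForReadinessIndicator(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Minute)
	if err != nil {
		fmt.Println("pollimmediate error:", err)
	}
}

Until the file appears, every CNI ADD and DEL for every pending pod fails the same way, which is why the identical message repeats here for network-metrics-daemon, network-check-target, the revision-pruner pods, and the guard pods.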
Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.113428664Z" level=info msg="runSandbox: removing pod sandbox from storage: 1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.116325905Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.116344628Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=1dad19b2-b3ca-49f6-a80f-52dfa31a5be4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:07.116550 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:36:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:07.116601 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:36:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:07.116626 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:36:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:07.116674 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(1c852a03c6fa13c5f87667f94653f67eae34d679bff7cbdc9b4427b48cdc6693): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:36:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:07.996432 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.996910319Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:07.996947952Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.008749648Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/efc972de-bf27-481a-a1bb-34aeee53a60a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.008769077Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.032256230Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.032307991Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e3a481e9\x2dc108\x2d495a\x2db697\x2dc340ff60fa8f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e3a481e9\x2dc108\x2d495a\x2db697\x2dc340ff60fa8f.mount has successfully entered the 'dead' state. Jan 23 17:36:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e3a481e9\x2dc108\x2d495a\x2db697\x2dc340ff60fa8f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e3a481e9\x2dc108\x2d495a\x2db697\x2dc340ff60fa8f.mount has successfully entered the 'dead' state. Jan 23 17:36:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e3a481e9\x2dc108\x2d495a\x2db697\x2dc340ff60fa8f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e3a481e9\x2dc108\x2d495a\x2db697\x2dc340ff60fa8f.mount has successfully entered the 'dead' state. 
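The four-fold repetition per pod (remote_runtime.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go) is one error logged at successive layers of the kubelet as it returns from a single CRI RunPodSandbox call. The "rpc error: code = Unknown desc = ..." wrapper is gRPC's: CRI-O returns a plain error, and the kubelet's CRI client receives it with code Unknown. A sketch of issuing the same RPC directly, assuming CRI-O's default socket path, with metadata copied from one pod in the log and an illustrative timeout:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's CRI socket (default path; an assumption in this sketch).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	// A minimal sandbox config; values mirror network-metrics-daemon-dzwx9.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "network-metrics-daemon-dzwx9",
				Namespace: "openshift-multus",
				Uid:       "fc516524-2ee1-45e5-8b33-0266acf098d1",
			},
		},
	})
	if err != nil {
		// When CNI ADD fails inside CRI-O, the failure arrives here as:
		// rpc error: code = Unknown desc = failed to create pod network sandbox ...
		fmt.Println(err)
		return
	}
	fmt.Println("sandbox ID:", resp.PodSandboxId)
}

The "Error syncing pod, skipping" line from pod_workers.go is the end of that chain; the kubelet then requeues the pod, which produces the next "No sandbox for pod can be found. Need to start a new one" attempt.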
Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.078316975Z" level=info msg="runSandbox: deleting pod ID b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885 from idIndex" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.078347199Z" level=info msg="runSandbox: removing pod sandbox b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.078366168Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.078398862Z" level=info msg="runSandbox: unmounting shmPath for sandbox b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.094460076Z" level=info msg="runSandbox: removing pod sandbox from storage: b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.097677556Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:08.097698928Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=2369e9ad-95e1-4c8b-b7ac-78352bb31712 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:08.097842 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:36:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:08.097885 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:36:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:08.097908 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:36:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:08.097955 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b010a7cba95a30baa163712b26ec62b9afe3af7902b40d96dfec23fb8ae45885): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:36:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:08.997010 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:36:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:08.997525 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:36:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:09.995973 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:36:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:09.996314435Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:09.996377449Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:10.009712568Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/56ce5ad2-4c65-498c-83d9-59a4740846d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:10.009733698Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:14.996121 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:36:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:14.996533337Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:14.996575396Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.007305447Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/59b0bf4f-7af4-49bd-8062-38f7e3544601 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.007325431Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.302558828Z" level=info msg="NetworkStart: stopping network for sandbox a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.302693767Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/8ad48ba6-4fd3-44fe-8823-01c4f2d2210c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.302717051Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.302723542Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.302729190Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.308563318Z" level=info msg="NetworkStart: stopping network for sandbox 2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.308663778Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/bf0ee918-03ef-4981-874e-5d4ba3a171c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:36:15.308684645Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.308691990Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.308697603Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.310602798Z" level=info msg="NetworkStart: stopping network for sandbox 39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.310761639Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/d3051070-af47-4b64-94a5-65bb87c734a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.310789426Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.310797619Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.310805018Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.311959263Z" level=info msg="NetworkStart: stopping network for sandbox 2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.312097832Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/cbc235b8-a841-4b32-a209-a277c0a5bbc5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.312123326Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.312132545Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.312138233Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.314929818Z" level=info msg="NetworkStart: stopping network for sandbox 
bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.315068180Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/12dd1ee5-4f8a-4f7d-8f77-5dc18002b984 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.315090001Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.315097068Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:15.315106737Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:15 hub-master-0.workload.bos2.lab conmon[141918]: conmon 3ef12765fbf132233883 : container 141930 exited with status 1 Jan 23 17:36:15 hub-master-0.workload.bos2.lab systemd[1]: crio-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope has successfully entered the 'dead' state. Jan 23 17:36:15 hub-master-0.workload.bos2.lab systemd[1]: crio-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope: Consumed 3.753s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope completed and consumed the indicated resources. Jan 23 17:36:15 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope has successfully entered the 'dead' state. Jan 23 17:36:15 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope: Consumed 55ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4.scope completed and consumed the indicated resources. 
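The repeated pair "error loading cached network config: network \"multus-cni-network\" not found in CNI cache" / "falling back to loading from existing plugins on disk" describes teardown of an attachment whose libcni cache entry is missing: the runtime cannot replay the cached config, so it reloads the network config from disk before issuing the CNI DEL. A hedged sketch of the ADD/DEL calls through libcni, with the container ID and netns path copied from the route-controller-manager sandbox above; the interface name and error handling are illustrative, and this is not CRI-O/ocicni's actual code:

package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Plugin search path and the Multus conf file reported in the log;
	// a nil exec selects libcni's default plugin executor.
	cni := libcni.NewCNIConfig([]string{"/var/lib/cni/bin"}, nil)

	conf, err := libcni.ConfFromFile("/etc/kubernetes/cni/net.d/00-multus.conf")
	if err != nil {
		log.Fatal(err)
	}
	list, err := libcni.ConfListFromConf(conf) // wrap the single conf as a list
	if err != nil {
		log.Fatal(err)
	}

	rt := &libcni.RuntimeConf{
		ContainerID: "a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0",
		NetNS:       "/var/run/netns/8ad48ba6-4fd3-44fe-8823-01c4f2d2210c",
		IfName:      "eth0", // illustrative; the runtime picks the real name
	}
	ctx := context.Background()

	// ADD: failures here surface as `plugin type="multus" ... failed (add)`.
	if _, err := cni.AddNetworkList(ctx, list, rt); err != nil {
		log.Println("CNI ADD failed:", err)
	}

	// DEL: on teardown the runtime prefers the per-attachment cached config;
	// when the cache entry is gone (as in the log) it falls back to the
	// config loaded from disk, then issues the DEL.
	if err := cni.DelNetworkList(ctx, list, rt); err != nil {
		log.Println("CNI DEL failed:", err)
	}
}

In this log both directions fail for the same reason: the multus plugin itself blocks on the readiness indicator file for ADD and DEL alike, so cleanup of old sandboxes times out just as creation of new ones does.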
Jan 23 17:36:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:16.349159 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4" exitCode=1 Jan 23 17:36:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:16.349187 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4} Jan 23 17:36:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:16.349218 8631 scope.go:115] "RemoveContainer" containerID="628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464" Jan 23 17:36:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:16.349442 8631 scope.go:115] "RemoveContainer" containerID="3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4" Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.349739087Z" level=info msg="Removing container: 628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464" id=76f5a4fb-8310-4d33-bf78-107500f1d14f name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.349828059Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=14f004b2-2547-41ee-861d-bfc6bf5fc7f7 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.353107155Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=14f004b2-2547-41ee-861d-bfc6bf5fc7f7 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.354355309Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=efb3a027-7850-4964-988a-4a4939d1ad27 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.354524087Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=efb3a027-7850-4964-988a-4a4939d1ad27 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.355117965Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=d93bbdce-3b9e-48ac-aec4-65a794e24671 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.355203002Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:16 
hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-a7cdc16e7257f27482fd34a87e7942810438f60a5830a9ea5ac1c1d1c5790e7a-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-a7cdc16e7257f27482fd34a87e7942810438f60a5830a9ea5ac1c1d1c5790e7a-merged.mount has successfully entered the 'dead' state. Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.399429888Z" level=info msg="Removed container 628d5fe7ddffe06b8c9772411ddcf6341170bca082cbd907bdc50d2e0c7ee464: openshift-multus/multus-cdt6c/kube-multus" id=76f5a4fb-8310-4d33-bf78-107500f1d14f name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:36:16 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope. -- Subject: Unit crio-conmon-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:36:16 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f. -- Subject: Unit crio-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.495679215Z" level=info msg="Created container 9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f: openshift-multus/multus-cdt6c/kube-multus" id=d93bbdce-3b9e-48ac-aec4-65a794e24671 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.496070839Z" level=info msg="Starting container: 9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f" id=643e5e90-b112-4147-bd15-b9398c7548d0 name=/runtime.v1.RuntimeService/StartContainer Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.502563851Z" level=info msg="Started container" PID=160216 containerID=9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f description=openshift-multus/multus-cdt6c/kube-multus id=643e5e90-b112-4147-bd15-b9398c7548d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8 Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.507237278Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_1c40dc76-7a88-4104-9246-23766b5955d7\"" Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.516809996Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.516830322Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.529382839Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\"" 
Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.539763879Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.539783014Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:36:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:16.539792367Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_1c40dc76-7a88-4104-9246-23766b5955d7\"" Jan 23 17:36:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:17.352492 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f} Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.035366520Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.035402552Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5f7df367\x2d2a26\x2d4b8b\x2d8843\x2d9c043f31f468.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5f7df367\x2d2a26\x2d4b8b\x2d8843\x2d9c043f31f468.mount has successfully entered the 'dead' state. Jan 23 17:36:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5f7df367\x2d2a26\x2d4b8b\x2d8843\x2d9c043f31f468.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5f7df367\x2d2a26\x2d4b8b\x2d8843\x2d9c043f31f468.mount has successfully entered the 'dead' state. Jan 23 17:36:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5f7df367\x2d2a26\x2d4b8b\x2d8843\x2d9c043f31f468.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5f7df367\x2d2a26\x2d4b8b\x2d8843\x2d9c043f31f468.mount has successfully entered the 'dead' state. 
Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.078300162Z" level=info msg="runSandbox: deleting pod ID 3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5 from idIndex" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.078325953Z" level=info msg="runSandbox: removing pod sandbox 3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.078344324Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.078356244Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.094412209Z" level=info msg="runSandbox: removing pod sandbox from storage: 3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.098049158Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.098065944Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=15396aee-91a3-4e11-aa62-76dbc88a9f7e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:18.098258 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:36:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:18.098314 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:36:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:18.098351 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:36:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:18.098420 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(3c73c4bb9b15ad9269a5d3c4cdd4f155c53468397960b519977d62b36d5f6da5): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:36:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:18.996457 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:36:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:18.996608 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.996785973Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.996822085Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.996943113Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:18.996978236Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:19.015881322Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/7b0798ee-398f-46bf-9e56-5045bfdb9044 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:19.015906490Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:19.016660423Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/325d982a-6329-4727-901b-88774912f7e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:19.016680616Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:20.995450 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:36:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:20.995774 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:36:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:20.995982716Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:20.996037009Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:20.996067272Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:20.996107197Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:21.010847251Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/eb0d3d16-b69d-49b0-a113-fc52e093b8c0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:21.010873464Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:21.012512295Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/d6222470-3cdd-4d16-88b2-573cb7192c8c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:21.012531474Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:21.997016 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:36:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:21.997552 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.031892802Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.031929261Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8d848589\x2d2d8c\x2d4f98\x2d8fc2\x2da0d99ba9d705.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8d848589\x2d2d8c\x2d4f98\x2d8fc2\x2da0d99ba9d705.mount has successfully entered the 'dead' state. Jan 23 17:36:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8d848589\x2d2d8c\x2d4f98\x2d8fc2\x2da0d99ba9d705.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8d848589\x2d2d8c\x2d4f98\x2d8fc2\x2da0d99ba9d705.mount has successfully entered the 'dead' state. Jan 23 17:36:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8d848589\x2d2d8c\x2d4f98\x2d8fc2\x2da0d99ba9d705.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8d848589\x2d2d8c\x2d4f98\x2d8fc2\x2da0d99ba9d705.mount has successfully entered the 'dead' state. Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.074308660Z" level=info msg="runSandbox: deleting pod ID 87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb from idIndex" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.074334336Z" level=info msg="runSandbox: removing pod sandbox 87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.074347217Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.074358803Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.094459671Z" level=info msg="runSandbox: removing pod sandbox from storage: 87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.097482857Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:25.097501433Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=94d45f34-a2ad-4477-9d29-7260bfed5834 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:25.097624 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:36:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:25.097666 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:36:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:25.097689 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:36:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:25.097735 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(87c8b347e37af7612089074cc0ebc407b466ac8fe6ba97712983dd4b741d6dfb): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:27.905720 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:27.905740 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:27.905747 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:27.905754 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:27.905761 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:27.905769 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:36:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:27.905775 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:36:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:28.143041296Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:36:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:28.996318 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:36:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:28.996637966Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:28.996677205Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:29.008448438Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/59be6d5f-624c-4b58-b6fd-dac7a4552cc2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:29.008467950Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:36.996787 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:36:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:36.997306 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:36:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495398.1318] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:36:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495398.1324] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:36:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495398.1324] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:36:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495398.1326] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:36:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495398.1331] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:36:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495398.1336] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:36:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:38.995898 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:36:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:38.996365561Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:38.996561809Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.010513640Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/4067b380-9e9c-487c-8e6a-2b25dc5c6a7e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.010541680Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.268839120Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.268875662Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6e3a8cfc\x2dd2bb\x2d4db4\x2d98d6\x2d304730c3dfee.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6e3a8cfc\x2dd2bb\x2d4db4\x2d98d6\x2d304730c3dfee.mount has successfully entered the 'dead' state. Jan 23 17:36:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6e3a8cfc\x2dd2bb\x2d4db4\x2d98d6\x2d304730c3dfee.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6e3a8cfc\x2dd2bb\x2d4db4\x2d98d6\x2d304730c3dfee.mount has successfully entered the 'dead' state. Jan 23 17:36:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6e3a8cfc\x2dd2bb\x2d4db4\x2d98d6\x2d304730c3dfee.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6e3a8cfc\x2dd2bb\x2d4db4\x2d98d6\x2d304730c3dfee.mount has successfully entered the 'dead' state. Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.319314097Z" level=info msg="runSandbox: deleting pod ID 2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f from idIndex" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.319336996Z" level=info msg="runSandbox: removing pod sandbox 2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.319349876Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.319361859Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.339462583Z" level=info msg="runSandbox: removing pod sandbox from storage: 2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495399.3395] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.342660201Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.342680643Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=c279e0c5-9286-4e60-9e35-55025da35194 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:39.342946 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:36:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:39.342997 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:36:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:39.343022 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:36:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:39.343068 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298 Jan 23 17:36:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:39.395269 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.395594656Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.395630015Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.405691574Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/43729c10-86e8-4ac5-90dc-1376b31de9cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:39.405710179Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-2a89662f25016af69e18d6ace54b761456734f5e68275b01c89d81e9a4fb109f-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:36:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:47.997073 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:36:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:47.997582 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.019831619Z" level=info msg="NetworkStart: stopping network for sandbox 47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.019971132Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/320bbbf2-0cfe-450e-a246-0134df47e99e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.019993482Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.019999586Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.020006720Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.290087648Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.290130026Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f33d3f42\x2d3a4c\x2d4746\x2da8c2\x2d677d4e732ae9.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f33d3f42\x2d3a4c\x2d4746\x2da8c2\x2d677d4e732ae9.mount has successfully entered the 'dead' state. Jan 23 17:36:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f33d3f42\x2d3a4c\x2d4746\x2da8c2\x2d677d4e732ae9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f33d3f42\x2d3a4c\x2d4746\x2da8c2\x2d677d4e732ae9.mount has successfully entered the 'dead' state. Jan 23 17:36:50 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f33d3f42\x2d3a4c\x2d4746\x2da8c2\x2d677d4e732ae9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f33d3f42\x2d3a4c\x2d4746\x2da8c2\x2d677d4e732ae9.mount has successfully entered the 'dead' state. Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.340290918Z" level=info msg="runSandbox: deleting pod ID 962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957 from idIndex" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.340535036Z" level=info msg="runSandbox: removing pod sandbox 962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.340548973Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.340562294Z" level=info msg="runSandbox: unmounting shmPath for sandbox 962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.360479438Z" level=info msg="runSandbox: removing pod sandbox from storage: 962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.363766362Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.363787986Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=7c6a3fa9-c696-4d5b-9939-bf168379d3a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:50.364028 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:36:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:50.364075 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:36:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:50.364097 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:36:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:50.364143 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(962cb5c6e3ccba66d5857e61a2fe23fb77eb5c7e0cf519ae11ec445946388957): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:36:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:50.414969 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.415271688Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.415306098Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.425849227Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/0ff52af8-5913-448b-81a4-70a1ad356330 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:50.425868044Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:51.019744519Z" level=info msg="NetworkStart: stopping network for sandbox 81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:51.019935840Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/167c30f5-0fe5-4276-9684-1f8f0d03cd46 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:51.019958530Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:51.019964965Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:51.019971612Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:53.021670245Z" level=info msg="NetworkStart: stopping network for sandbox c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:53.021811290Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/efc972de-bf27-481a-a1bb-34aeee53a60a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] 
Aliases:map[]}" Jan 23 17:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:53.021834487Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:53.021841237Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:53.021847504Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:55.023559462Z" level=info msg="NetworkStart: stopping network for sandbox 79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:55.023709133Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/56ce5ad2-4c65-498c-83d9-59a4740846d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:55.023731270Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:55.023738935Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:36:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:55.023745222Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:36:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:36:58.143345096Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:36:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:36:58.996460 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:36:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:36:58.996949 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.019883312Z" level=info msg="NetworkStart: stopping network for sandbox 58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.020024168Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/59b0bf4f-7af4-49bd-8062-38f7e3544601 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.020047385Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.020054164Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.020060450Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.314671863Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.314707236Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.318571628Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.318602055Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8ad48ba6\x2d4fd3\x2d44fe\x2d8823\x2d01c4f2d2210c.mount: Succeeded.
Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.321176480Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.321220850Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.322448791Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.322479933Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bf0ee918\x2d03ef\x2d4981\x2d874e\x2d5d4ba3a171c2.mount: Succeeded. Jan 23 17:37:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cbc235b8\x2da841\x2d4b32\x2da209\x2da277c0a5bbc5.mount: Succeeded. Jan 23 17:37:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d3051070\x2daf47\x2d4b64\x2d94a5\x2d65bb87c734a6.mount: Succeeded.
Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.328099749Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.328126516Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-12dd1ee5\x2d4f8a\x2d4f7d\x2d8f77\x2d5dc18002b984.mount: Succeeded. Jan 23 17:37:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8ad48ba6\x2d4fd3\x2d44fe\x2d8823\x2d01c4f2d2210c.mount: Succeeded. Jan 23 17:37:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d3051070\x2daf47\x2d4b64\x2d94a5\x2d65bb87c734a6.mount: Succeeded.
Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.360309877Z" level=info msg="runSandbox: deleting pod ID a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0 from idIndex" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.360336096Z" level=info msg="runSandbox: removing pod sandbox a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.360352580Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.360366902Z" level=info msg="runSandbox: unmounting shmPath for sandbox a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.363362589Z" level=info msg="runSandbox: deleting pod ID 39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79 from idIndex" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.363393128Z" level=info msg="runSandbox: removing pod sandbox 39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.363410531Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.363426180Z" level=info msg="runSandbox: unmounting shmPath for sandbox 39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.365318474Z" level=info msg="runSandbox: deleting pod ID 2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb from idIndex" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.365342633Z" level=info msg="runSandbox: removing pod sandbox 2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.365355573Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.365371487Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.368375591Z" level=info msg="runSandbox: deleting pod ID 2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784 from idIndex" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.368412045Z" level=info msg="runSandbox: removing pod sandbox 2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.368426842Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.368440251Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.368375825Z" level=info msg="runSandbox: deleting pod ID bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec from idIndex" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.368506451Z" level=info msg="runSandbox: removing pod sandbox bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.368519338Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.368532305Z" level=info msg="runSandbox: unmounting shmPath for sandbox bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.377452554Z" level=info msg="runSandbox: removing pod sandbox from storage: a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.380760120Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.380778622Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=90020ba0-15cd-4621-86bb-5464162f9349 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.381003 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.381192 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.381221 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.381273 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.384453982Z" level=info msg="runSandbox: removing pod sandbox from storage: 2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.384464553Z" level=info msg="runSandbox: removing pod sandbox from storage: 39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.387688866Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.387708808Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=749c3bbf-09c6-4ecd-82a0-eb8175a7cd03 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.387899 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.387936 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.387958 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.388000 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.392498512Z" level=info msg="runSandbox: removing pod sandbox from storage: 2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.392519011Z" level=info msg="runSandbox: removing pod sandbox from storage: bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.393794784Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.393816381Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=9398f6d8-c42c-4e45-8736-05e4aeb00a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.394068 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.394108 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.394129 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.394170 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.396741944Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.396762767Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=ff4ce851-67da-4fcb-8703-8d11ac9847d8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.396975 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.397008 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.397029 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.397068 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.399590121Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.399606783Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=09aa035c-b85e-4f7a-ba9b-27bb5b2f6658 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.399778 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.399812 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.399834 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:00.399870 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:00.435813 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:00.436008 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:00.436112 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436153465Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436183259Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436239014Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436273312Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436310307Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436336752Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436508193Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:00.436305 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:37:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:00.436315 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436535113Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436559892Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.436576683Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.454648599Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/e6d31fb4-972f-426f-b817-59962e3532e4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.454670497Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.455353501Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/4c8e77d4-e032-4b98-8388-d899f0ece0b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.455374258Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.465693538Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/3a90076a-d858-4223-bb57-776c31ae0fe1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.465714382Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.466015084Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/77c2dd0f-9039-4e86-bbe1-2112ad27650f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.466036842Z" level=info msg="Adding pod 
openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.467489637Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/5606b344-e3a5-43ba-9460-91461142f80d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:00.467508721Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-12dd1ee5\x2d4f8a\x2d4f7d\x2d8f77\x2d5dc18002b984.mount: Succeeded. Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-12dd1ee5\x2d4f8a\x2d4f7d\x2d8f77\x2d5dc18002b984.mount: Succeeded. Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cbc235b8\x2da841\x2d4b32\x2da209\x2da277c0a5bbc5.mount: Succeeded. Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cbc235b8\x2da841\x2d4b32\x2da209\x2da277c0a5bbc5.mount: Succeeded. Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d3051070\x2daf47\x2d4b64\x2d94a5\x2d65bb87c734a6.mount: Succeeded. Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bb85d816334b7dc9c9e8a8c85ef598af4fa78c3235ddfecfc1f76b2e92c6fdec-userdata-shm.mount: Succeeded. Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bf0ee918\x2d03ef\x2d4981\x2d874e\x2d5d4ba3a171c2.mount: Succeeded.
Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bf0ee918\x2d03ef\x2d4981\x2d874e\x2d5d4ba3a171c2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-bf0ee918\x2d03ef\x2d4981\x2d874e\x2d5d4ba3a171c2.mount has successfully entered the 'dead' state.
Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2b631041dea6649fd2007b267a4fdf74c446b0cae9babe4caf393c54b90dd784-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8ad48ba6\x2d4fd3\x2d44fe\x2d8823\x2d01c4f2d2210c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-8ad48ba6\x2d4fd3\x2d44fe\x2d8823\x2d01c4f2d2210c.mount has successfully entered the 'dead' state.
Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-39fa2d6c9cd9da88aca321ee78905120b10dd2d1c6b1d121a037ba920939fc79-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2215470ab0fc53a4b1dfb3a1e8b49297302e3c934793cdf5cabbcbe7a01940cb-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:37:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a2cbc6e6b0868c211784537b7011970c7c92ac1603dc0cd5b3e71249ba86d8d0-userdata-shm.mount has successfully entered the 'dead' state.
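The burst of run-netns-*, run-ipcns-* and *-userdata-shm.mount "Succeeded" units above is systemd confirming that the per-sandbox namespace and shm mounts were unmounted as CRI-O tears failed sandboxes down. A rough sketch for measuring that churn from the journal (the time window is illustrative; journalctl accepts plain HH:MM:SS values for --since/--until):

    # How many sandbox network namespaces were unmounted in this minute?
    journalctl --since 17:37:00 --until 17:38:00 | grep -c 'run-netns-.*\.mount: Succeeded'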
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029635783Z" level=info msg="NetworkStart: stopping network for sandbox 5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029637525Z" level=info msg="NetworkStart: stopping network for sandbox 6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029794508Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/325d982a-6329-4727-901b-88774912f7e8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029818993Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029825690Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029833203Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029825152Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/7b0798ee-398f-46bf-9e56-5045bfdb9044 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029913700Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029920629Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:37:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:04.029927167Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.023347166Z" level=info msg="NetworkStart: stopping network for sandbox fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.023507799Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/eb0d3d16-b69d-49b0-a113-fc52e093b8c0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.023533913Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.023540869Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.023547720Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.024906265Z" level=info msg="NetworkStart: stopping network for sandbox bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.025059243Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/d6222470-3cdd-4d16-88b2-573cb7192c8c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.025081641Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.025090617Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:37:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:06.025096848Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:12.996584 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:37:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:12.997088 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:37:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:14.021516987Z" level=info msg="NetworkStart: stopping network for sandbox 17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:14.021672956Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/59be6d5f-624c-4b58-b6fd-dac7a4552cc2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:14.021696546Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:37:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:14.021703219Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:37:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:14.021709918Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.024131603Z" level=info msg="NetworkStart: stopping network for sandbox fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.024494157Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/4067b380-9e9c-487c-8e6a-2b25dc5c6a7e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.024517635Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.024524632Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.024530732Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.418867706Z" level=info msg="NetworkStart: stopping network for sandbox 482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.419005446Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/43729c10-86e8-4ac5-90dc-1376b31de9cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.419025456Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.419031588Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:37:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:24.419039395Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:27.905984 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:27.906006 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:27.906013 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:27.906021 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:27.906027 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:27.906033 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:27.906042 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:27.997774 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:37:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:27.998315 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:37:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:28.142222862Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.030572046Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.030613468Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-320bbbf2\x2d0cfe\x2d450e\x2da246\x2d0134df47e99e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-320bbbf2\x2d0cfe\x2d450e\x2da246\x2d0134df47e99e.mount has successfully entered the 'dead' state.
Jan 23 17:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-320bbbf2\x2d0cfe\x2d450e\x2da246\x2d0134df47e99e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-320bbbf2\x2d0cfe\x2d450e\x2da246\x2d0134df47e99e.mount has successfully entered the 'dead' state.
Jan 23 17:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-320bbbf2\x2d0cfe\x2d450e\x2da246\x2d0134df47e99e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-320bbbf2\x2d0cfe\x2d450e\x2da246\x2d0134df47e99e.mount has successfully entered the 'dead' state.
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.078304168Z" level=info msg="runSandbox: deleting pod ID 47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0 from idIndex" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.078328641Z" level=info msg="runSandbox: removing pod sandbox 47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.078342118Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.078356619Z" level=info msg="runSandbox: unmounting shmPath for sandbox 47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.094421872Z" level=info msg="runSandbox: removing pod sandbox from storage: 47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.097339013Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.097358100Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=cac4fcc9-45da-474e-bafb-72d0f090191d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:35.097583 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:35.097743 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:35.097772 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:37:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:35.097833 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(47a5143eaaf57ec0e455c92190d2673d0ff25d3873324276b4e85190952cdda0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.438591854Z" level=info msg="NetworkStart: stopping network for sandbox 9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.438732324Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/0ff52af8-5913-448b-81a4-70a1ad356330 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.438755246Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.438761564Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:37:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:35.438768024Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.030403212Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.030438784Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-167c30f5\x2d0fe5\x2d4276\x2d9684\x2d1f8f0d03cd46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-167c30f5\x2d0fe5\x2d4276\x2d9684\x2d1f8f0d03cd46.mount has successfully entered the 'dead' state.
Jan 23 17:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-167c30f5\x2d0fe5\x2d4276\x2d9684\x2d1f8f0d03cd46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-167c30f5\x2d0fe5\x2d4276\x2d9684\x2d1f8f0d03cd46.mount has successfully entered the 'dead' state.
Jan 23 17:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-netns-167c30f5\x2d0fe5\x2d4276\x2d9684\x2d1f8f0d03cd46.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-167c30f5\x2d0fe5\x2d4276\x2d9684\x2d1f8f0d03cd46.mount has successfully entered the 'dead' state.
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.063305161Z" level=info msg="runSandbox: deleting pod ID 81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37 from idIndex" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.063333730Z" level=info msg="runSandbox: removing pod sandbox 81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.063348271Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.063360685Z" level=info msg="runSandbox: unmounting shmPath for sandbox 81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37-userdata-shm.mount has successfully entered the 'dead' state.
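The add-side failures above are explicit about the gate: Multus will not attach any pod to "multus-cni-network" until the default network writes its readiness indicator file, and /var/run/multus/cni/net.d/10-ovn-kubernetes.conf never appears because ovnkube-node itself is crash-looping. A minimal sketch for confirming both halves (the file check runs on the node; the oc command assumes a working kubeconfig on a client):

    # On the node: the exact file Multus is waiting for
    ls -l /var/run/multus/cni/net.d/10-ovn-kubernetes.conf
    # From a client: is the OVN-Kubernetes daemon on this node healthy?
    oc -n openshift-ovn-kubernetes get pods -o wide | grep hub-master-0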
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.076442376Z" level=info msg="runSandbox: removing pod sandbox from storage: 81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.079850487Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:36.079868046Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=8687eeb5-60e8-4aa7-8cdf-2c8b5a269d36 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:36.080078 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:37:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:36.080122 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:37:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:36.080149 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:37:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:36.080204 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(81b4df4b41f8d14ed2afe612b48d1d450f0e5f7231390c62742fe87f4207ab37): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.032774263Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.032807533Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-efc972de\x2dbf27\x2d481a\x2da1bb\x2d34aeee53a60a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-efc972de\x2dbf27\x2d481a\x2da1bb\x2d34aeee53a60a.mount has successfully entered the 'dead' state.
Jan 23 17:37:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-efc972de\x2dbf27\x2d481a\x2da1bb\x2d34aeee53a60a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-efc972de\x2dbf27\x2d481a\x2da1bb\x2d34aeee53a60a.mount has successfully entered the 'dead' state.
Jan 23 17:37:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-efc972de\x2dbf27\x2d481a\x2da1bb\x2d34aeee53a60a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-efc972de\x2dbf27\x2d481a\x2da1bb\x2d34aeee53a60a.mount has successfully entered the 'dead' state.
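The same CreatePodSandbox failure now repeats pod by pod (the guard pods above, dns-default here, and more below), because the kubelet re-runs RunPodSandbox for every pending pod and each attempt times out on the same missing file. A sketch for enumerating from the journal how many distinct pods are caught in the loop (the window is illustrative):

    # Tally CNI ADD failures per pod over the incident window
    journalctl -u crio --since 17:37:00 | grep -o 'error adding pod [^ ]*' | sort | uniq -c | sort -rn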
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.073308780Z" level=info msg="runSandbox: deleting pod ID c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f from idIndex" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.073333572Z" level=info msg="runSandbox: removing pod sandbox c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.073347105Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.073358770Z" level=info msg="runSandbox: unmounting shmPath for sandbox c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.089441066Z" level=info msg="runSandbox: removing pod sandbox from storage: c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.092815641Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:38.092833383Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=c4253491-313c-4b92-be11-56c0aa1cb396 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:38.093035 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:37:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:38.093076 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:37:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:38.093098 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:37:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:38.093140 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c4af0ca19f2b355ecae6ab956f454d80556586ff4753cbb832d39bebd83ec50f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:37:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:39.996912 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:37:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:39.997433 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.034136445Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.034172253Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-56ce5ad2\x2d4c65\x2d498c\x2d83d9\x2d59a4740846d0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-56ce5ad2\x2d4c65\x2d498c\x2d83d9\x2d59a4740846d0.mount has successfully entered the 'dead' state.
Jan 23 17:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-56ce5ad2\x2d4c65\x2d498c\x2d83d9\x2d59a4740846d0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-56ce5ad2\x2d4c65\x2d498c\x2d83d9\x2d59a4740846d0.mount has successfully entered the 'dead' state.
Jan 23 17:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-56ce5ad2\x2d4c65\x2d498c\x2d83d9\x2d59a4740846d0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-56ce5ad2\x2d4c65\x2d498c\x2d83d9\x2d59a4740846d0.mount has successfully entered the 'dead' state.
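The kubelet lines above point at the root cause rather than another symptom: ovnkube-node-897lw is in CrashLoopBackOff (back-off 5m0s), so the default network never becomes ready. A minimal sketch for pulling the crashed container's output (pod and container names are taken from the errors above; --previous selects the last failed attempt):

    # Why is ovnkube-node crashing?
    oc -n openshift-ovn-kubernetes logs ovnkube-node-897lw -c ovnkube-node --previous
    # Or on the node itself, via CRI-O:
    crictl ps -a --name ovnkube-node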
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.078339673Z" level=info msg="runSandbox: deleting pod ID 79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208 from idIndex" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.078363009Z" level=info msg="runSandbox: removing pod sandbox 79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.078375934Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.078387846Z" level=info msg="runSandbox: unmounting shmPath for sandbox 79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.089463363Z" level=info msg="runSandbox: removing pod sandbox from storage: 79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.092934411Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:40.092953320Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=c166f679-8182-428f-83b1-34173e2dacdc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:40.093143 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:40.093180 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:40.093200 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:37:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:40.093247 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(79e49a865a4cdee091ee88844d4c0fa41b06f47390d47f90c0b745fbde717208): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.031104350Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.031143444Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:45 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-59b0bf4f\x2d7af4\x2d49bd\x2d8062\x2d38f7e3544601.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-59b0bf4f\x2d7af4\x2d49bd\x2d8062\x2d38f7e3544601.mount has successfully entered the 'dead' state.
Jan 23 17:37:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-59b0bf4f\x2d7af4\x2d49bd\x2d8062\x2d38f7e3544601.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-59b0bf4f\x2d7af4\x2d49bd\x2d8062\x2d38f7e3544601.mount has successfully entered the 'dead' state.
Jan 23 17:37:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-59b0bf4f\x2d7af4\x2d49bd\x2d8062\x2d38f7e3544601.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-59b0bf4f\x2d7af4\x2d49bd\x2d8062\x2d38f7e3544601.mount has successfully entered the 'dead' state.
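Until the readiness indicator appears, every affected sandbox stays NotReady and is repeatedly created and destroyed, as the network-metrics-daemon entries above and below show. A sketch for listing the stuck sandboxes on the node (the inspectp argument is a placeholder to fill in from the first command's output):

    # Sandboxes that never got a network
    crictl pods --state NotReady
    # Inspect one of them (replace SANDBOX_ID with an ID from the listing)
    crictl inspectp SANDBOX_ID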
Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.072306100Z" level=info msg="runSandbox: deleting pod ID 58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed from idIndex" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.072333506Z" level=info msg="runSandbox: removing pod sandbox 58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.072347692Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.072360526Z" level=info msg="runSandbox: unmounting shmPath for sandbox 58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.088429216Z" level=info msg="runSandbox: removing pod sandbox from storage: 58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.091920925Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.091939259Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=efc0932b-6dd2-4fe8-abb6-57db2f305997 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:45.092159 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:45.092212 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:45.092241 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:37:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:45.092287 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(58a584da782af1bfc57a17bbb36d9ccd61db0a218b99e2239b9de4555c0e33ed): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.468954333Z" level=info msg="NetworkStart: stopping network for sandbox b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469120400Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/4c8e77d4-e032-4b98-8388-d899f0ece0b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469146658Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469153907Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469160654Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469327876Z" level=info msg="NetworkStart: stopping network for sandbox 57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469477533Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/e6d31fb4-972f-426f-b817-59962e3532e4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469502501Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469508930Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.469515258Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479645591Z" level=info msg="NetworkStart: stopping network for sandbox a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479658049Z" level=info msg="NetworkStart: stopping network for sandbox 238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479791856Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/3a90076a-d858-4223-bb57-776c31ae0fe1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479815980Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479823362Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479830306Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479862630Z" level=info msg="NetworkStart: stopping network for sandbox 650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479893539Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/77c2dd0f-9039-4e86-bbe1-2112ad27650f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479921800Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479933905Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479944444Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479970052Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/5606b344-e3a5-43ba-9460-91461142f80d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479990692Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.479996928Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:37:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:45.480002834Z" 
level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:48.995665 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:37:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:48.995768 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:48.996028753Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:48.996069814Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:48.996151731Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:48.996183430Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.017090999Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a0e21aa9-6193-4796-88a3-b35be282e2b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.017122190Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.017239021Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/49b40744-14b0-4ba6-b748-c0e6f13ac97a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.017260829Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.040082444Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.040116826Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.040625695Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.040651623Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-325d982a\x2d6329\x2d4727\x2d901b\x2d88774912f7e8.mount: Succeeded. Jan 23 17:37:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7b0798ee\x2d398f\x2d46bf\x2d9e56\x2d5045bfdb9044.mount: Succeeded.
Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.083304653Z" level=info msg="runSandbox: deleting pod ID 5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20 from idIndex" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.083329959Z" level=info msg="runSandbox: removing pod sandbox 5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.083343971Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.083375837Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.087277737Z" level=info msg="runSandbox: deleting pod ID 6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb from idIndex" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.087302305Z" level=info msg="runSandbox: removing pod sandbox 6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.087314555Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.087325062Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.095442798Z" level=info msg="runSandbox: removing pod sandbox from storage: 5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.098264126Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.098281911Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=47b9d512-98c0-4176-808f-2690904227cd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 
17:37:49.098452 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:37:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:49.098496 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:37:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:49.098520 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:37:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:49.098570 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.103440123Z" level=info msg="runSandbox: removing pod sandbox from storage: 6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.106665296Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:49.106684327Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1a43d627-487a-4261-9bfb-065b5d22e40d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:49.106872 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:37:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:49.106913 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:37:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:49.106935 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:37:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:49.106983 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:37:50 hub-master-0.workload.bos2.lab systemd[1]: run-netns-325d982a\x2d6329\x2d4727\x2d901b\x2d88774912f7e8.mount: Succeeded. Jan 23 17:37:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-325d982a\x2d6329\x2d4727\x2d901b\x2d88774912f7e8.mount: Succeeded. Jan 23 17:37:50 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7b0798ee\x2d398f\x2d46bf\x2d9e56\x2d5045bfdb9044.mount: Succeeded. Jan 23 17:37:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7b0798ee\x2d398f\x2d46bf\x2d9e56\x2d5045bfdb9044.mount: Succeeded. Jan 23 17:37:50 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5017b453948da796a8b8019d4aee09410a039d2e6bf2ea7a9176e51788f85a20-userdata-shm.mount: Succeeded. Jan 23 17:37:50 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6470cf65c622934f715b1d97851a3e73e06ba3cb21f517961a9a5975461f72fb-userdata-shm.mount: Succeeded. Jan 23 17:37:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:50.995503 8631 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:37:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:50.995831783Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:50.995869934Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.006821685Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/204dc27a-5abb-4d4d-93e3-975bdc6fd014 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.006843684Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.034578791Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.034620052Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.034899965Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.034936617Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4" 
id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d6222470\x2d3cdd\x2d4d16\x2d88b2\x2d573cb7192c8c.mount: Succeeded. Jan 23 17:37:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-eb0d3d16\x2db69d\x2d49b0\x2da113\x2dfc52e093b8c0.mount: Succeeded. Jan 23 17:37:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-eb0d3d16\x2db69d\x2d49b0\x2da113\x2dfc52e093b8c0.mount: Succeeded. Jan 23 17:37:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d6222470\x2d3cdd\x2d4d16\x2d88b2\x2d573cb7192c8c.mount: Succeeded. Jan 23 17:37:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d6222470\x2d3cdd\x2d4d16\x2d88b2\x2d573cb7192c8c.mount: Succeeded. Jan 23 17:37:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-eb0d3d16\x2db69d\x2d49b0\x2da113\x2dfc52e093b8c0.mount: Succeeded.
Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.079290242Z" level=info msg="runSandbox: deleting pod ID bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4 from idIndex" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.079320652Z" level=info msg="runSandbox: removing pod sandbox bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.079341489Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.079357218Z" level=info msg="runSandbox: unmounting shmPath for sandbox bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.079294263Z" level=info msg="runSandbox: deleting pod ID fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd from idIndex" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.079404404Z" level=info msg="runSandbox: removing pod sandbox fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.079419321Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.079435844Z" level=info msg="runSandbox: unmounting shmPath for sandbox fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.095483913Z" level=info msg="runSandbox: removing pod sandbox from storage: bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.095494184Z" level=info msg="runSandbox: removing pod sandbox from storage: fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.098327663Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.098348358Z" 
level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=aa03cb74-2817-49d4-8c64-2b071a2cb80e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.098605 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.098649 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.098673 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.098722 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.101546890Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.101565306Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=30cb40b1-d5d1-4244-8d05-ed78f49bee31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.101736 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.101769 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.101791 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.101830 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:51.996232 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:51.996645 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.996586282Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:37:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:51.996624568Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:37:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:51.997147 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:37:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:52.007607331Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/41a9f8a4-508a-4d90-a350-084a39c810b8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:37:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:52.007628062Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:37:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bc3bcf5f93e54301b5dcf64b17dc6d4adb5d706513dac4ad2f26b6537e40d1d4-userdata-shm.mount: Succeeded. Jan 23 17:37:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fe7d9d398e2a93182aaea9be1b2cac67064e6451d4062ffc845327a8e228f8bd-userdata-shm.mount: Succeeded. Jan 23 17:37:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:37:57.996738 8631 util.go:30] "No sandbox for pod can be found.
Jan 23 17:37:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:57.997164250Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:57.997225189Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:37:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:58.008244803Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/bd48f716-4a72-49f1-9853-b81ce532ee7b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:37:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:58.008268656Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:37:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:58.145619045Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.032462833Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.032498758Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-59be6d5f\x2d624c\x2d4b58\x2db6fd\x2ddac7a4552cc2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-59be6d5f\x2d624c\x2d4b58\x2db6fd\x2ddac7a4552cc2.mount has successfully entered the 'dead' state.
Jan 23 17:37:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-59be6d5f\x2d624c\x2d4b58\x2db6fd\x2ddac7a4552cc2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-59be6d5f\x2d624c\x2d4b58\x2db6fd\x2ddac7a4552cc2.mount has successfully entered the 'dead' state.
Jan 23 17:37:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-59be6d5f\x2d624c\x2d4b58\x2db6fd\x2ddac7a4552cc2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-59be6d5f\x2d624c\x2d4b58\x2db6fd\x2ddac7a4552cc2.mount has successfully entered the 'dead' state.
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.080306848Z" level=info msg="runSandbox: deleting pod ID 17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0 from idIndex" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.080330949Z" level=info msg="runSandbox: removing pod sandbox 17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.080347130Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.080359898Z" level=info msg="runSandbox: unmounting shmPath for sandbox 17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.105441646Z" level=info msg="runSandbox: removing pod sandbox from storage: 17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.108335670Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:37:59.108354820Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=e7896f2a-6110-489d-a4e4-5e8b6411a39d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:37:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:59.108574 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:37:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:59.108740 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:37:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:59.108763 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:37:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:37:59.108813 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(17b49b1f24dd7c5cc0ed91dfd5a4a05c1b240c571eacd055f3f1b6af8ab67bf0): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:38:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:00.996687 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:38:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:00.997002459Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:00.997041077Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:01.008700437Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/7ab2002f-3ac3-4466-9623-1e56b75e24d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:01.008725157Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:02.996886 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:38:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:02.997105 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:38:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:02.997221187Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:02.997265111Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:02.997585 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:38:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:03.008133518Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/81dd55e0-99bc-4274-8e98-d26c114a7e8d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:03.008159661Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:04.995883 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:38:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:04.995896 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:38:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:04.996226578Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:04.996262306Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:38:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:04.996350053Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:04.996380841Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:38:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:05.010422217Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/a2ca8552-f19a-472e-a5b3-c0b8340526bd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:05.010441990Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:05.010963357Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/3c1e13d3-c590-4231-925c-3fabac789d7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:05.010985193Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1260] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1264] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1265] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1522] dhcp4 (eno12409): canceled 
Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1524] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1536] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1538] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe)
Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1539] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1541] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1544] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 17:38:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495488.1548] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.035187946Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.035432087Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4067b380\x2d9e9c\x2d487c\x2d8e6a\x2d2b25dc5c6a7e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4067b380\x2d9e9c\x2d487c\x2d8e6a\x2d2b25dc5c6a7e.mount has successfully entered the 'dead' state.
Jan 23 17:38:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4067b380\x2d9e9c\x2d487c\x2d8e6a\x2d2b25dc5c6a7e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4067b380\x2d9e9c\x2d487c\x2d8e6a\x2d2b25dc5c6a7e.mount has successfully entered the 'dead' state.
Jan 23 17:38:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4067b380\x2d9e9c\x2d487c\x2d8e6a\x2d2b25dc5c6a7e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4067b380\x2d9e9c\x2d487c\x2d8e6a\x2d2b25dc5c6a7e.mount has successfully entered the 'dead' state.
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.077279018Z" level=info msg="runSandbox: deleting pod ID fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c from idIndex" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.077306116Z" level=info msg="runSandbox: removing pod sandbox fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.077319638Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.077330748Z" level=info msg="runSandbox: unmounting shmPath for sandbox fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.103428222Z" level=info msg="runSandbox: removing pod sandbox from storage: fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.106698318Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.106718653Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=0c509e1b-399b-406a-af91-5917a8f75734 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:09.106945 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:09.106994 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:09.107020 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:09.107071 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(fe73008696a1313b574f0a6f1b9dc004c6fb76ae5499e05831aa03c96c33907c): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.429645953Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.429677240Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-43729c10\x2d86e8\x2d4ac5\x2d90dc\x2d1376b31de9cc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-43729c10\x2d86e8\x2d4ac5\x2d90dc\x2d1376b31de9cc.mount has successfully entered the 'dead' state.
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.472304105Z" level=info msg="runSandbox: deleting pod ID 482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721 from idIndex" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.472329936Z" level=info msg="runSandbox: removing pod sandbox 482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.472341949Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.472354778Z" level=info msg="runSandbox: unmounting shmPath for sandbox 482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.484462383Z" level=info msg="runSandbox: removing pod sandbox from storage: 482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.487669863Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.487687890Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=b1976a46-d949-4c82-8b3d-177b1cf1dcb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:09.487882 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:09.487920 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:09.487941 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:09.487987 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:38:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:09.567222 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.567526156Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.567558981Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.577896569Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/b4892cf8-f142-4a0a-adb1-e2ce047deb1f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:09.577922743Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495489.8086] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 17:38:10 hub-master-0.workload.bos2.lab systemd[1]: run-netns-43729c10\x2d86e8\x2d4ac5\x2d90dc\x2d1376b31de9cc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-43729c10\x2d86e8\x2d4ac5\x2d90dc\x2d1376b31de9cc.mount has successfully entered the 'dead' state.
Jan 23 17:38:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-43729c10\x2d86e8\x2d4ac5\x2d90dc\x2d1376b31de9cc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-43729c10\x2d86e8\x2d4ac5\x2d90dc\x2d1376b31de9cc.mount has successfully entered the 'dead' state.
Jan 23 17:38:10 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-482e7a815e3288e632ef45e7c7e1827c5997b24ba2f53404326454969164b721-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:38:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:13.995758 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:38:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:13.996083525Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:13.996122335Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:14.007452376Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/2c03b2e8-b4c5-4520-906f-36bba6c19def Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:14.007473681Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:15.997065 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:38:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:15.997613 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.449374045Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.449415517Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0ff52af8\x2d5913\x2d448b\x2d81a4\x2d70a1ad356330.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0ff52af8\x2d5913\x2d448b\x2d81a4\x2d70a1ad356330.mount has successfully entered the 'dead' state.
Jan 23 17:38:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0ff52af8\x2d5913\x2d448b\x2d81a4\x2d70a1ad356330.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0ff52af8\x2d5913\x2d448b\x2d81a4\x2d70a1ad356330.mount has successfully entered the 'dead' state.
Jan 23 17:38:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0ff52af8\x2d5913\x2d448b\x2d81a4\x2d70a1ad356330.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0ff52af8\x2d5913\x2d448b\x2d81a4\x2d70a1ad356330.mount has successfully entered the 'dead' state.
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.499323386Z" level=info msg="runSandbox: deleting pod ID 9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28 from idIndex" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.499356427Z" level=info msg="runSandbox: removing pod sandbox 9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.499373934Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.499388057Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.512465612Z" level=info msg="runSandbox: removing pod sandbox from storage: 9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.515471014Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.515491078Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=fe9fdb30-db24-419e-b2fc-9851883b8fb1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:20.515679 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:38:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:20.515728 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:38:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:20.515765 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:38:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:20.515814 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(9d33491366ac0cd2bb70c83913e9ef2523e13e916a6b68c63eaf8f7892742c28): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30
Jan 23 17:38:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:20.594260 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.594588337Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.594622091Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.605718042Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/c73ed2b7-ae94-4040-b1d6-779b869033e3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:20.605738771Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:23.996438 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:23.996876743Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:23.996915112Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:24.007733590Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/27911485-0639-47d8-b942-735f1e419f20 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:24.007759576Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:27.906802 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:27.906820 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:27.906827 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:27.906833 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:27.906840 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:27.906846 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:38:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:27.906854 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:38:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:28.143024769Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:38:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:29.997020 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca"
Jan 23 17:38:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:29.997863769Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=07216f8f-004b-49d6-aa99-a473f6900ef8 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:38:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:29.998026891Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=07216f8f-004b-49d6-aa99-a473f6900ef8 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:38:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:29.998503047Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=bfd1909c-6205-4369-8d78-a88f05c936aa name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:38:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:29.998616838Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bfd1909c-6205-4369-8d78-a88f05c936aa name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:38:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:29.999622274Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=06299642-5657-44cc-8bed-ef4c843c5bc1 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:38:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:29.999696576Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope. -- Subject: Unit crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope has finished starting up. -- -- The start-up result is done.
Jan 23 17:38:30 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope.
-- Subject: Unit crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:38:30 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.
-- Subject: Unit crio-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.115918151Z" level=info msg="Created container d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=06299642-5657-44cc-8bed-ef4c843c5bc1 name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.116468856Z" level=info msg="Starting container: d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" id=edc5562b-72f1-4ea8-9090-0cad76ceb4bb name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.135626162Z" level=info msg="Started container" PID=164213 containerID=d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=edc5562b-72f1-4ea8-9090-0cad76ceb4bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.140419213Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.150350283Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.150368511Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.150380758Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.159286886Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.159308559Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.159319615Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.168045085Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.168063129Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.168072709Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.176543171Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.176559517Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.176568133Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.184678834Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.184693403Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.480569798Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.480618018Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.481035431Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.481071438Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4c8e77d4\x2de032\x2d4b98\x2d8388\x2dd899f0ece0b2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4c8e77d4\x2de032\x2d4b98\x2d8388\x2dd899f0ece0b2.mount has successfully entered the 'dead' state.
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.490803623Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.490831844Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.490901260Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.490932017Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.490975506Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.491008406Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.547423758Z" level=info msg="runSandbox: deleting pod ID b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded from idIndex" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.547462190Z" level=info msg="runSandbox: removing pod sandbox b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.547481744Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.547503973Z" level=info msg="runSandbox: unmounting shmPath for sandbox b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.550336695Z" level=info msg="runSandbox: deleting pod ID a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09 from idIndex" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.550370428Z" level=info msg="runSandbox: removing pod sandbox a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.550385768Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.550399008Z" level=info msg="runSandbox: unmounting shmPath for sandbox a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.555294874Z" level=info msg="runSandbox: deleting pod ID 57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be from idIndex" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.555322591Z" level=info msg="runSandbox: removing pod sandbox 57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.555335124Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.555346666Z" level=info msg="runSandbox: unmounting shmPath for sandbox 57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.555485137Z" level=info msg="runSandbox: removing pod sandbox from storage: b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.556416466Z" level=info msg="runSandbox: deleting pod ID 238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd from idIndex" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.556445445Z" level=info msg="runSandbox: removing pod sandbox 238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.556457965Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.556470940Z" level=info msg="runSandbox: unmounting shmPath for sandbox 238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.558698344Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.558719277Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=fccabedd-d6b3-4489-8351-22fb33a71f68 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.559059 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.559198 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.559226 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.559272 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.563289519Z" level=info msg="runSandbox: deleting pod ID 650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953 from idIndex" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.563317711Z" level=info msg="runSandbox: removing pod sandbox 650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.563330577Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.563342384Z" level=info msg="runSandbox: unmounting shmPath for sandbox 650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.567509697Z" level=info msg="runSandbox: removing pod sandbox from storage: a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.571280913Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.571302658Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=4c83529c-b7f9-4647-bc09-c0ad7ba6633e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.571471824Z" level=info msg="runSandbox: removing pod sandbox from storage: 57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.571495 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.571528 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.571549 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.571483258Z" level=info msg="runSandbox: removing pod sandbox from storage: 238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.571586 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.574862881Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.574885954Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=45f504ae-e5f9-442f-86de-a504c8748a49 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.575119 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.575151 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.575177 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.575220 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.578251388Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.578270207Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=4eb25cc8-98b2-48aa-85fd-82f6d61043a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.578405 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.578451 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.578477 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.578534 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.579437292Z" level=info msg="runSandbox: removing pod sandbox from storage: 650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.582482516Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.582499857Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=139be02c-99da-4600-908c-67ce13ac5993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.582708 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.582740 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.582762 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:30.582797 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:30.612717 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/194.log"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:30.614165 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927}
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:30.614314 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:30.614659 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:30.614695 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.614710480Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.614744748Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:30.614663 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:30.614742 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.614937608Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.614961311Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.614992188Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.615016190Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.615032655Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.615073589Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.615106602Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.614975364Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.646179700Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/64518841-d256-4169-85d1-b33bf1e52654 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.646209682Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.646353509Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5a6f624c-1e79-4ec7-880b-12beba0032b9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.646372346Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.648043712Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/7219d756-ec09-40c2-b55d-9886e24ca10c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.648066040Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.650990534Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/3433445d-efa2-4622-8128-4a53381338bc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.651013808Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.652473051Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/2e40f522-f787-4089-8933-fb904a2653eb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:30.652498128Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:30.667741 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:38:30 hub-master-0.workload.bos2.lab conmon[164195]: conmon d4e19a3827626f411cd7 : container 164213 exited with status 1
Jan 23 17:38:30 hub-master-0.workload.bos2.lab systemd[1]: crio-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope has successfully entered the 'dead' state.
Jan 23 17:38:30 hub-master-0.workload.bos2.lab systemd[1]: crio-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope: Consumed 569ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope completed and consumed the indicated resources.
Jan 23 17:38:30 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope has successfully entered the 'dead' state.
Jan 23 17:38:30 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope: Consumed 46ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927.scope completed and consumed the indicated resources.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5606b344\x2de3a5\x2d43ba\x2d9460\x2d91461142f80d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5606b344\x2de3a5\x2d43ba\x2d9460\x2d91461142f80d.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5606b344\x2de3a5\x2d43ba\x2d9460\x2d91461142f80d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5606b344\x2de3a5\x2d43ba\x2d9460\x2d91461142f80d.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5606b344\x2de3a5\x2d43ba\x2d9460\x2d91461142f80d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5606b344\x2de3a5\x2d43ba\x2d9460\x2d91461142f80d.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-77c2dd0f\x2d9039\x2d4e86\x2dbbe1\x2d2112ad27650f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-77c2dd0f\x2d9039\x2d4e86\x2dbbe1\x2d2112ad27650f.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-77c2dd0f\x2d9039\x2d4e86\x2dbbe1\x2d2112ad27650f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-77c2dd0f\x2d9039\x2d4e86\x2dbbe1\x2d2112ad27650f.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-77c2dd0f\x2d9039\x2d4e86\x2dbbe1\x2d2112ad27650f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-77c2dd0f\x2d9039\x2d4e86\x2dbbe1\x2d2112ad27650f.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3a90076a\x2dd858\x2d4223\x2dbb57\x2d776c31ae0fe1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-3a90076a\x2dd858\x2d4223\x2dbb57\x2d776c31ae0fe1.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3a90076a\x2dd858\x2d4223\x2dbb57\x2d776c31ae0fe1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3a90076a\x2dd858\x2d4223\x2dbb57\x2d776c31ae0fe1.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3a90076a\x2dd858\x2d4223\x2dbb57\x2d776c31ae0fe1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3a90076a\x2dd858\x2d4223\x2dbb57\x2d776c31ae0fe1.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-238b9d85ba4e0e42b20433bfd2600a6c7cebcf8fe12819b54fb53098b01d06cd-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-650d16542b14a475c7c4c610f82b13fd897a9636ae808ad0c2e0be81852f5953-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a86178b8b6a0ba1a871233139498861a549d058c3acef4cea2fa74be89e72b09-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4c8e77d4\x2de032\x2d4b98\x2d8388\x2dd899f0ece0b2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4c8e77d4\x2de032\x2d4b98\x2d8388\x2dd899f0ece0b2.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4c8e77d4\x2de032\x2d4b98\x2d8388\x2dd899f0ece0b2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4c8e77d4\x2de032\x2d4b98\x2d8388\x2dd899f0ece0b2.mount has successfully entered the 'dead' state.
Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e6d31fb4\x2d972f\x2d426f\x2db817\x2d59962e3532e4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e6d31fb4\x2d972f\x2d426f\x2db817\x2d59962e3532e4.mount has successfully entered the 'dead' state. Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e6d31fb4\x2d972f\x2d426f\x2db817\x2d59962e3532e4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e6d31fb4\x2d972f\x2d426f\x2db817\x2d59962e3532e4.mount has successfully entered the 'dead' state. Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e6d31fb4\x2d972f\x2d426f\x2db817\x2d59962e3532e4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e6d31fb4\x2d972f\x2d426f\x2db817\x2d59962e3532e4.mount has successfully entered the 'dead' state. Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b59b30f15270c1a0492dabc50d3f3fb8b8e894d0bf1cd022b941f9d19ca1aded-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-57708db0223299bf19785395ac9742e677226ff80513dabc0829e420bd7009be-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:38:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:31.618370 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/195.log" Jan 23 17:38:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:31.618991 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/194.log" Jan 23 17:38:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:31.620030 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" exitCode=1 Jan 23 17:38:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:31.620052 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927} Jan 23 17:38:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:31.620075 8631 scope.go:115] "RemoveContainer" containerID="98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" Jan 23 17:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:31.621012554Z" level=info msg="Removing container: 98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca" id=2503e1de-d245-48e3-9f6a-11de7c83594d name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:38:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:31.621051 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:38:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:31.621580 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:38:31 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-ce0fa0dcc2bbe4a2e2154d920b462733d086cad5ec48c745c4ba9626ea3be7c5-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-ce0fa0dcc2bbe4a2e2154d920b462733d086cad5ec48c745c4ba9626ea3be7c5-merged.mount has successfully entered the 'dead' state. 
Jan 23 17:38:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:31.660708945Z" level=info msg="Removed container 98f5e27fc85c63cedbcf5d5186b39c0186a2498271810549f6aa3275033df6ca: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=2503e1de-d245-48e3-9f6a-11de7c83594d name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:38:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:32.623395 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/195.log" Jan 23 17:38:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:32.625445 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:38:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:32.625960 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.030313162Z" level=info msg="NetworkStart: stopping network for sandbox 755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.030654161Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/49b40744-14b0-4ba6-b748-c0e6f13ac97a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.030679003Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.030686047Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.030692837Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.032342772Z" level=info msg="NetworkStart: stopping network for sandbox 0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.032441196Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a0e21aa9-6193-4796-88a3-b35be282e2b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: 
IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.032461884Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.032468742Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:34.032474569Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:36.019379021Z" level=info msg="NetworkStart: stopping network for sandbox 6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:36.019515475Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/204dc27a-5abb-4d4d-93e3-975bdc6fd014 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:36.019537449Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:36.019544281Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:36.019550954Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:37.018976158Z" level=info msg="NetworkStart: stopping network for sandbox af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:37.019111810Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/41a9f8a4-508a-4d90-a350-084a39c810b8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:37.019133659Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:37.019140114Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:37.019146257Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:43.020627492Z" level=info msg="NetworkStart: 
stopping network for sandbox 20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:43.020765135Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/bd48f716-4a72-49f1-9853-b81ce532ee7b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:43.020785498Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:43.020792317Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:43.020798744Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:46.022687934Z" level=info msg="NetworkStart: stopping network for sandbox 7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:46.022831910Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/7ab2002f-3ac3-4466-9623-1e56b75e24d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:46.022855284Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:46.022861693Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:46.022868437Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:38:47.996869 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:38:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:38:47.997486 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:48.022209743Z" level=info msg="NetworkStart: stopping network for sandbox 
e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:48.022380197Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/81dd55e0-99bc-4274-8e98-d26c114a7e8d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:48.022407891Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:48.022415312Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:48.022422878Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.023431363Z" level=info msg="NetworkStart: stopping network for sandbox f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.023573812Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/a2ca8552-f19a-472e-a5b3-c0b8340526bd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.023596852Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.023603147Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.023609896Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.024619011Z" level=info msg="NetworkStart: stopping network for sandbox d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.024717258Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/3c1e13d3-c590-4231-925c-3fabac789d7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:50 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.024736484Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.024743132Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:50.024748636Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:54.591526385Z" level=info msg="NetworkStart: stopping network for sandbox c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:54.591717563Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/b4892cf8-f142-4a0a-adb1-e2ce047deb1f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:54.591746072Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:54.591752908Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:54.591761345Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:38:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:58.141590853Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:38:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:59.019604767Z" level=info msg="NetworkStart: stopping network for sandbox a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:38:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:59.019748203Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/2c03b2e8-b4c5-4520-906f-36bba6c19def Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:38:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:59.019774750Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:38:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:59.019782481Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:38:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:38:59.019788764Z" level=info msg="Deleting pod 
openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:00.996338 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:39:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:00.996872 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:05.618754133Z" level=info msg="NetworkStart: stopping network for sandbox 93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:05.618905122Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/c73ed2b7-ae94-4040-b1d6-779b869033e3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:05.618932907Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:05.618939937Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:39:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:05.618947890Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:09.020485707Z" level=info msg="NetworkStart: stopping network for sandbox 73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:09.020648031Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/27911485-0639-47d8-b942-735f1e419f20 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:39:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:09.020673881Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:39:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:09.020681802Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:39:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:09.020689784Z" level=info msg="Deleting pod 
openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.659464538Z" level=info msg="NetworkStart: stopping network for sandbox b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.659660921Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5a6f624c-1e79-4ec7-880b-12beba0032b9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.659684785Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.659691026Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.659697512Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.660719493Z" level=info msg="NetworkStart: stopping network for sandbox 32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.660825767Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/64518841-d256-4169-85d1-b33bf1e52654 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.660846561Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.660852727Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.660858749Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.662033388Z" level=info msg="NetworkStart: stopping network for sandbox e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.662203605Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e 
UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/7219d756-ec09-40c2-b55d-9886e24ca10c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.662246433Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.662255580Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.662263186Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.663951840Z" level=info msg="NetworkStart: stopping network for sandbox 4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664054203Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/3433445d-efa2-4622-8128-4a53381338bc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664075456Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664083108Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664091125Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664811194Z" level=info msg="NetworkStart: stopping network for sandbox cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664932544Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/2e40f522-f787-4089-8933-fb904a2653eb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664952762Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664959252Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:39:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:15.664965141Z" level=info msg="Deleting pod 
openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:15.996642 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:39:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:15.997149 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.042163902Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.042214821Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.042884030Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.042915532Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-49b40744\x2d14b0\x2d4ba6\x2db748\x2dc0e6f13ac97a.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-49b40744\x2d14b0\x2d4ba6\x2db748\x2dc0e6f13ac97a.mount has successfully entered the 'dead' state. Jan 23 17:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a0e21aa9\x2d6193\x2d4796\x2d88a3\x2db35be282e2b6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a0e21aa9\x2d6193\x2d4796\x2d88a3\x2db35be282e2b6.mount has successfully entered the 'dead' state. Jan 23 17:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-49b40744\x2d14b0\x2d4ba6\x2db748\x2dc0e6f13ac97a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-49b40744\x2d14b0\x2d4ba6\x2db748\x2dc0e6f13ac97a.mount has successfully entered the 'dead' state. Jan 23 17:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a0e21aa9\x2d6193\x2d4796\x2d88a3\x2db35be282e2b6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a0e21aa9\x2d6193\x2d4796\x2d88a3\x2db35be282e2b6.mount has successfully entered the 'dead' state. Jan 23 17:39:19 hub-master-0.workload.bos2.lab systemd[1]: run-netns-49b40744\x2d14b0\x2d4ba6\x2db748\x2dc0e6f13ac97a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-49b40744\x2d14b0\x2d4ba6\x2db748\x2dc0e6f13ac97a.mount has successfully entered the 'dead' state. Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.090370583Z" level=info msg="runSandbox: deleting pod ID 755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0 from idIndex" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.090374953Z" level=info msg="runSandbox: deleting pod ID 0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad from idIndex" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.090411172Z" level=info msg="runSandbox: removing pod sandbox 755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.090437971Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.090452632Z" level=info msg="runSandbox: unmounting shmPath for sandbox 755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.090476357Z" level=info msg="runSandbox: removing pod sandbox 0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:39:19.090490688Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.090503610Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.110439660Z" level=info msg="runSandbox: removing pod sandbox from storage: 0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.110441611Z" level=info msg="runSandbox: removing pod sandbox from storage: 755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.114190648Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.114215056Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=69adb278-dc96-4356-a72f-1818c56bde75 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:19.114432 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:19.114583 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:19.114607 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:19.114656 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.117814820Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:19.117840543Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=5f3b7c2b-12e8-483b-8d4d-726691b8f3e3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:19.118027 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:19.118061 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:19.118084 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:39:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:19.118124 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:39:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a0e21aa9\x2d6193\x2d4796\x2d88a3\x2db35be282e2b6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a0e21aa9\x2d6193\x2d4796\x2d88a3\x2db35be282e2b6.mount has successfully entered the 'dead' state. Jan 23 17:39:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-755ec691d634857fbc7fdf3f981d49150f46add6d29f77f69b1898a358c17fd0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:39:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0545bb070dab02b2103d8912abe6951112e3ce8dd5a2fbf97a41946057eccdad-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.029501901Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.029537213Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-204dc27a\x2d5abb\x2d4d4d\x2d93e3\x2d975bdc6fd014.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-204dc27a\x2d5abb\x2d4d4d\x2d93e3\x2d975bdc6fd014.mount has successfully entered the 'dead' state. Jan 23 17:39:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-204dc27a\x2d5abb\x2d4d4d\x2d93e3\x2d975bdc6fd014.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-204dc27a\x2d5abb\x2d4d4d\x2d93e3\x2d975bdc6fd014.mount has successfully entered the 'dead' state. Jan 23 17:39:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-204dc27a\x2d5abb\x2d4d4d\x2d93e3\x2d975bdc6fd014.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-204dc27a\x2d5abb\x2d4d4d\x2d93e3\x2d975bdc6fd014.mount has successfully entered the 'dead' state. 
Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.088302793Z" level=info msg="runSandbox: deleting pod ID 6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb from idIndex" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.088329666Z" level=info msg="runSandbox: removing pod sandbox 6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.088344074Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.088356328Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.108435128Z" level=info msg="runSandbox: removing pod sandbox from storage: 6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.112031213Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:21.112048529Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=1d06ea94-5e12-4d76-a2cb-d8fcff5a18cc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:21.112425    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:21.112470    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:39:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:21.112492    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:39:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:21.112536    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(6ef82f1c7ea8e29c1d1c9b24ab5311b7bc9d93685f1aa2efa86c618c819973fb): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.030822042Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.030855938Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-41a9f8a4\x2d508a\x2d4d90\x2da350\x2d084a39c810b8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-41a9f8a4\x2d508a\x2d4d90\x2da350\x2d084a39c810b8.mount has successfully entered the 'dead' state.
Jan 23 17:39:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-41a9f8a4\x2d508a\x2d4d90\x2da350\x2d084a39c810b8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-41a9f8a4\x2d508a\x2d4d90\x2da350\x2d084a39c810b8.mount has successfully entered the 'dead' state.
Jan 23 17:39:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-41a9f8a4\x2d508a\x2d4d90\x2da350\x2d084a39c810b8.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-41a9f8a4\x2d508a\x2d4d90\x2da350\x2d084a39c810b8.mount has successfully entered the 'dead' state.
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.073302196Z" level=info msg="runSandbox: deleting pod ID af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17 from idIndex" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.073327260Z" level=info msg="runSandbox: removing pod sandbox af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.073340471Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.073354336Z" level=info msg="runSandbox: unmounting shmPath for sandbox af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.093437964Z" level=info msg="runSandbox: removing pod sandbox from storage: af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.096926632Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:22.096944093Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=c4d01415-20a2-4cc3-b3b5-789ff660b843 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:22.097141    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:22.097184    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:39:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:22.097217    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:39:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:22.097271    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(af5f8045f57104636b914b15e051dfd78894b8b3ab9d2ef2e0a8636384caca17): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:27.907701    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:27.907730    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:27.907736    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:27.907745    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:27.907752    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:27.907759    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:39:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:27.907766    8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.032869516Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.032904655Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bd48f716\x2d4a72\x2d49f1\x2d9853\x2db81ce532ee7b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-bd48f716\x2d4a72\x2d49f1\x2d9853\x2db81ce532ee7b.mount has successfully entered the 'dead' state.
Jan 23 17:39:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bd48f716\x2d4a72\x2d49f1\x2d9853\x2db81ce532ee7b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-bd48f716\x2d4a72\x2d49f1\x2d9853\x2db81ce532ee7b.mount has successfully entered the 'dead' state.
Jan 23 17:39:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bd48f716\x2d4a72\x2d49f1\x2d9853\x2db81ce532ee7b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-bd48f716\x2d4a72\x2d49f1\x2d9853\x2db81ce532ee7b.mount has successfully entered the 'dead' state.
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.071277698Z" level=info msg="runSandbox: deleting pod ID 20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8 from idIndex" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.071302535Z" level=info msg="runSandbox: removing pod sandbox 20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.071317477Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.071331537Z" level=info msg="runSandbox: unmounting shmPath for sandbox 20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.091436136Z" level=info msg="runSandbox: removing pod sandbox from storage: 20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.094878086Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.094895484Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=bf4e6f0d-e194-4298-a412-4fa895ead054 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:28.095152    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:28.095201    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:39:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:28.095230    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:39:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:28.095280    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(20ef39f445719f622b4edcbf240c6d1b1f8c112ef8fd53df26fe4baccba530d8): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:39:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:28.141713128Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:39:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:28.996991    8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:39:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:28.997500    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:39:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:30.995569    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:39:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:30.995938378Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:30.995979090Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.007896166Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/8ba8adc7-5066-49ed-8112-096600d17a78 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.007918193Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.032995006Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.033026481Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7ab2002f\x2d3ac3\x2d4466\x2d9623\x2d1e56b75e24d9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7ab2002f\x2d3ac3\x2d4466\x2d9623\x2d1e56b75e24d9.mount has successfully entered the 'dead' state.
Jan 23 17:39:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7ab2002f\x2d3ac3\x2d4466\x2d9623\x2d1e56b75e24d9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7ab2002f\x2d3ac3\x2d4466\x2d9623\x2d1e56b75e24d9.mount has successfully entered the 'dead' state.
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.073301273Z" level=info msg="runSandbox: deleting pod ID 7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04 from idIndex" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.073323920Z" level=info msg="runSandbox: removing pod sandbox 7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.073335830Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.073346651Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.094398054Z" level=info msg="runSandbox: removing pod sandbox from storage: 7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.097259310Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.097276913Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=2aeea013-80cb-4810-ba56-1056d55cbe9c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:31.097469    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:31.097506    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:39:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:31.097529    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:39:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:31.097575    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:39:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7ab2002f\x2d3ac3\x2d4466\x2d9623\x2d1e56b75e24d9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7ab2002f\x2d3ac3\x2d4466\x2d9623\x2d1e56b75e24d9.mount has successfully entered the 'dead' state.
Jan 23 17:39:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-7b23280d47dae2195009621c0ec207565edd73d7ed1f128ed7d0540df044ac04-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:31.995760    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.996077327Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:31.996114693Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:32.006655336Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/e14a78ac-7e32-433a-9418-5c5dc447ab02 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:32.006675507Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:32.995879    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:39:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:32.996092    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:39:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:32.996242080Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:32.996278804Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:32.996361228Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:32.996393453Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.015375796Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f5c195a1-00bc-455a-b80f-f7d5fa36f46d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.015593163Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.015499458Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/a010c45d-b982-4d2c-9b8d-b1e3eec49007 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.015672554Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.032629355Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.032667000Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-81dd55e0\x2d99bc\x2d4274\x2d8e98\x2dd26c114a7e8d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-81dd55e0\x2d99bc\x2d4274\x2d8e98\x2dd26c114a7e8d.mount has successfully entered the 'dead' state.
Jan 23 17:39:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-81dd55e0\x2d99bc\x2d4274\x2d8e98\x2dd26c114a7e8d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-81dd55e0\x2d99bc\x2d4274\x2d8e98\x2dd26c114a7e8d.mount has successfully entered the 'dead' state.
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.082280638Z" level=info msg="runSandbox: deleting pod ID e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a from idIndex" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.082306719Z" level=info msg="runSandbox: removing pod sandbox e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.082322506Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.082339620Z" level=info msg="runSandbox: unmounting shmPath for sandbox e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.102436577Z" level=info msg="runSandbox: removing pod sandbox from storage: e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.105157274Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:33.105176146Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=179defe8-6406-4203-a323-abc96fae46dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:33.105440    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:33.105487    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:39:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:33.105508    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:39:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:33.105554    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:39:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-81dd55e0\x2d99bc\x2d4274\x2d8e98\x2dd26c114a7e8d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-81dd55e0\x2d99bc\x2d4274\x2d8e98\x2dd26c114a7e8d.mount has successfully entered the 'dead' state.
Jan 23 17:39:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e2d4e3f383dbc02ae833df972a0a3e4f781217dd25f5d2a0d82a9ae7800f395a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.035286487Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.035337279Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.036038756Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.036068402Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a2ca8552\x2df19a\x2d472e\x2da5b3\x2dc0b8340526bd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a2ca8552\x2df19a\x2d472e\x2da5b3\x2dc0b8340526bd.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3c1e13d3\x2dc590\x2d4231\x2d925c\x2d3fabac789d7f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3c1e13d3\x2dc590\x2d4231\x2d925c\x2d3fabac789d7f.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3c1e13d3\x2dc590\x2d4231\x2d925c\x2d3fabac789d7f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3c1e13d3\x2dc590\x2d4231\x2d925c\x2d3fabac789d7f.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a2ca8552\x2df19a\x2d472e\x2da5b3\x2dc0b8340526bd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a2ca8552\x2df19a\x2d472e\x2da5b3\x2dc0b8340526bd.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3c1e13d3\x2dc590\x2d4231\x2d925c\x2d3fabac789d7f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-3c1e13d3\x2dc590\x2d4231\x2d925c\x2d3fabac789d7f.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a2ca8552\x2df19a\x2d472e\x2da5b3\x2dc0b8340526bd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a2ca8552\x2df19a\x2d472e\x2da5b3\x2dc0b8340526bd.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.087303729Z" level=info msg="runSandbox: deleting pod ID d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b from idIndex" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.087327705Z" level=info msg="runSandbox: removing pod sandbox d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.087342187Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.087354159Z" level=info msg="runSandbox: unmounting shmPath for sandbox d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.087414586Z" level=info msg="runSandbox: deleting pod ID f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313 from idIndex" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.087443017Z" level=info msg="runSandbox: removing pod sandbox f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.087461193Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.087473507Z" level=info msg="runSandbox: unmounting shmPath for sandbox f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.101409425Z" level=info msg="runSandbox: removing pod sandbox from storage: f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.104956694Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.104975518Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=66d3597e-286d-4ddd-968d-3cd0f65d0f7b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:35.105199    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:35.105248    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:35.105271    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:35.105321    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(f14e505d899f78b00405cd1beca7accc47446711143ba0f6506bcdda1b8e6313): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.108434340Z" level=info msg="runSandbox: removing pod sandbox from storage: d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.111624462Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:35.111644090Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=73fde3bc-ae14-4424-ab77-29c480e15608 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:35.111837    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:35.111869    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:35.111890    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:39:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:35.111929    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d74b1bbb5c182aac74c2e516668e7a646d279c82138ed5db2f7e08a550fd219b): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495578.1182] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 17:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495578.1186] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 17:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495578.1188] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 17:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495578.1200] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 17:39:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495578.1201] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.603047875Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.603082969Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b4892cf8\x2df142\x2d4a0a\x2dadb1\x2de2ce047deb1f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b4892cf8\x2df142\x2d4a0a\x2dadb1\x2de2ce047deb1f.mount has successfully entered the 'dead' state.
Jan 23 17:39:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b4892cf8\x2df142\x2d4a0a\x2dadb1\x2de2ce047deb1f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b4892cf8\x2df142\x2d4a0a\x2dadb1\x2de2ce047deb1f.mount has successfully entered the 'dead' state.
Jan 23 17:39:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b4892cf8\x2df142\x2d4a0a\x2dadb1\x2de2ce047deb1f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b4892cf8\x2df142\x2d4a0a\x2dadb1\x2de2ce047deb1f.mount has successfully entered the 'dead' state.
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.652314080Z" level=info msg="runSandbox: deleting pod ID c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70 from idIndex" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.652339533Z" level=info msg="runSandbox: removing pod sandbox c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.652355112Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.652367817Z" level=info msg="runSandbox: unmounting shmPath for sandbox c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.669475684Z" level=info msg="runSandbox: removing pod sandbox from storage: c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.673203758Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.673228158Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=600fd695-4011-49de-b0e9-f235db659993 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:39.673388    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:39.673432    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:39:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:39.673453    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:39:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:39.673500    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c3da5945e87087bd507291130b700b81c1ecd295f19ada78992ccc831bde2b70): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:39:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:39.752360    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.752582554Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.752623659Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.764613597Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/a7555a51-2474-4170-ae24-15573fba7b02 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:39.764636272Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:42.995781    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:39:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:42.996263545Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:42.996305806Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:42.996469    8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:39:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:42.996983    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:39:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:43.007471857Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/ca2f5ed4-02ee-46cc-b402-bf70522b17ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:43.007491576Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.031300856Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.031334185Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2c03b2e8\x2db4c5\x2d4520\x2d906f\x2d36bba6c19def.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2c03b2e8\x2db4c5\x2d4520\x2d906f\x2d36bba6c19def.mount has successfully entered the 'dead' state.
Jan 23 17:39:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2c03b2e8\x2db4c5\x2d4520\x2d906f\x2d36bba6c19def.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2c03b2e8\x2db4c5\x2d4520\x2d906f\x2d36bba6c19def.mount has successfully entered the 'dead' state.
Jan 23 17:39:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2c03b2e8\x2db4c5\x2d4520\x2d906f\x2d36bba6c19def.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2c03b2e8\x2db4c5\x2d4520\x2d906f\x2d36bba6c19def.mount has successfully entered the 'dead' state.
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.080273656Z" level=info msg="runSandbox: deleting pod ID a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83 from idIndex" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.080298126Z" level=info msg="runSandbox: removing pod sandbox a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.080311796Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.080328245Z" level=info msg="runSandbox: unmounting shmPath for sandbox a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.096393962Z" level=info msg="runSandbox: removing pod sandbox from storage: a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.099334216Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.099353059Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=b33dd8a3-8467-4402-b638-ad4cb70c55c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:44.099567    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:44.099608    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:39:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:44.099631    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:39:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:44.099675    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a41ae0369143bf62f7d0d4f791dd8aa5e30c18ee2d7b20a70ad0a7d1ef922e83): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:39:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:44.996309    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.996653061Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:44.996705031Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:45.011776483Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/63b624b2-2641-4f76-b02f-43e908d52ba9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:45.011803347Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:45.996513    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:39:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:45.996830017Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:45.996870200Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:46.007001905Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b9367f39-7d0f-4835-849a-7310097d5c50 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:46.007025568Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:46.995930    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:39:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:46.996311248Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:46.996348989Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:47.006540853Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/36e5daea-5e52-4738-a9c3-3710b0889596 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:47.006563312Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:47.996135    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:39:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:47.996460789Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:47.996494501Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:48.007804754Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/0894d3e7-4484-4bdd-a0e2-53677ab690ca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:48.007824403Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.629947718Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.629989876Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c73ed2b7\x2dae94\x2d4040\x2db1d6\x2d779b869033e3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c73ed2b7\x2dae94\x2d4040\x2db1d6\x2d779b869033e3.mount has successfully entered the 'dead' state.
Jan 23 17:39:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c73ed2b7\x2dae94\x2d4040\x2db1d6\x2d779b869033e3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c73ed2b7\x2dae94\x2d4040\x2db1d6\x2d779b869033e3.mount has successfully entered the 'dead' state.
Jan 23 17:39:50 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c73ed2b7\x2dae94\x2d4040\x2db1d6\x2d779b869033e3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c73ed2b7\x2dae94\x2d4040\x2db1d6\x2d779b869033e3.mount has successfully entered the 'dead' state.
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.664308878Z" level=info msg="runSandbox: deleting pod ID 93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3 from idIndex" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.664334992Z" level=info msg="runSandbox: removing pod sandbox 93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.664350032Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.664364342Z" level=info msg="runSandbox: unmounting shmPath for sandbox 93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.680464368Z" level=info msg="runSandbox: removing pod sandbox from storage: 93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.683436616Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.683454770Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=641667fe-b517-4547-afc0-07e6eede06e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:50.683681    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:39:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:50.683732    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:39:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:50.683756    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:39:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:50.683811    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(93493f8abb62065f94bc6339108ac66a949c3693df9d6e1c643bd91e8136e6d3): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30
Jan 23 17:39:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:50.772997    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.773226003Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.773257310Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.783547006Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/0d3f7057-a8a1-4dee-a50b-c5b8abe0238e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:39:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:50.783566297Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.031741612Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.031996389Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-27911485\x2d0639\x2d47d8\x2db942\x2d735f1e419f20.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-27911485\x2d0639\x2d47d8\x2db942\x2d735f1e419f20.mount has successfully entered the 'dead' state.
Jan 23 17:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-27911485\x2d0639\x2d47d8\x2db942\x2d735f1e419f20.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-27911485\x2d0639\x2d47d8\x2db942\x2d735f1e419f20.mount has successfully entered the 'dead' state.
Jan 23 17:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-27911485\x2d0639\x2d47d8\x2db942\x2d735f1e419f20.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-27911485\x2d0639\x2d47d8\x2db942\x2d735f1e419f20.mount has successfully entered the 'dead' state.
Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.082278818Z" level=info msg="runSandbox: deleting pod ID 73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79 from idIndex" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.082302102Z" level=info msg="runSandbox: removing pod sandbox 73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.082315151Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.082328695Z" level=info msg="runSandbox: unmounting shmPath for sandbox 73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:39:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.102457391Z" level=info msg="runSandbox: removing pod sandbox from storage: 73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.105448018Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.105465877Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=2cad1897-6a4d-4e33-8079-dc6e3d637131 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:54.105679 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:39:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:54.105721 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:39:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:54.105744 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:39:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:54.105795 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(73e3de42732e62e9e590ea4e0919a65fbb7fe8d137eaa92850c6fc67f9534c79): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:39:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:54.996636 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.996932846Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:39:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:54.996976464Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:39:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:55.008799248Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/75a7a8eb-1748-4386-b9e9-a7d0913e8713 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:39:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:55.008818564Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:39:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:39:57.997545 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:39:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:39:57.998051 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:39:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:39:58.143196100Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.671524488Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.671566787Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.671812212Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.671843174Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.674475467Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.674513715Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-64518841\x2dd256\x2d4169\x2d85d1\x2db33bf1e52654.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-64518841\x2dd256\x2d4169\x2d85d1\x2db33bf1e52654.mount has successfully entered the 'dead' state. 
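[Annotation] Every RunPodSandbox failure above reduces to one condition: Multus refuses to serve CNI ADD or DEL requests until the default network's readiness indicator file exists, and with ovnkube-node crash-looping, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf is never written, so each sandbox create and teardown polls until it hits the generic apimachinery timeout. A minimal sketch of that wait, assuming the k8s.io/apimachinery wait package; the 1s interval, 10s timeout, and function name are illustrative, and only the file path is taken verbatim from the log:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicatorFile polls until path exists, mirroring the wait
// behind the "still waiting for readinessindicatorfile" entries above.
// Interval and timeout here are illustrative, not Multus's real defaults.
func waitForReadinessIndicatorFile(path string, timeout time.Duration) error {
	return wait.PollImmediate(1*time.Second, timeout, func() (bool, error) {
		_, err := os.Stat(path)
		return err == nil, nil // file absent: not fatal, just keep polling
	})
}

func main() {
	// Path copied verbatim from the log; it only appears once the default
	// OVN-Kubernetes network is up, which a crash-looping ovnkube-node
	// never achieves.
	err := waitForReadinessIndicatorFile(
		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 10*time.Second)
	if err != nil {
		// wait.ErrWaitTimeout stringifies to "timed out waiting for the
		// condition" -- the exact suffix on the failed entries above.
		fmt.Println("CNI request fails:", err)
	}
}
```

The DEL-path variant of the same wait surfaces as "PollImmediate error waiting for ReadinessIndicatorFile (on del)" in the cleanup entries, which is why even sandbox teardown fails while the default network is down.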
Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.675647798Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.675685091Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.676054458Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.676088533Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3433445d\x2defa2\x2d4622\x2d8128\x2d4a53381338bc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-3433445d\x2defa2\x2d4622\x2d8128\x2d4a53381338bc.mount has successfully entered the 'dead' state. Jan 23 17:40:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7219d756\x2dec09\x2d40c2\x2db55d\x2d9886e24ca10c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7219d756\x2dec09\x2d40c2\x2db55d\x2d9886e24ca10c.mount has successfully entered the 'dead' state. Jan 23 17:40:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5a6f624c\x2d1e79\x2d4ec7\x2d880b\x2d12beba0032b9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5a6f624c\x2d1e79\x2d4ec7\x2d880b\x2d12beba0032b9.mount has successfully entered the 'dead' state. Jan 23 17:40:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2e40f522\x2df787\x2d4089\x2d8933\x2dfb904a2653eb.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2e40f522\x2df787\x2d4089\x2d8933\x2dfb904a2653eb.mount has successfully entered the 'dead' state. Jan 23 17:40:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7219d756\x2dec09\x2d40c2\x2db55d\x2d9886e24ca10c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7219d756\x2dec09\x2d40c2\x2db55d\x2d9886e24ca10c.mount has successfully entered the 'dead' state. Jan 23 17:40:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-64518841\x2dd256\x2d4169\x2d85d1\x2db33bf1e52654.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-64518841\x2dd256\x2d4169\x2d85d1\x2db33bf1e52654.mount has successfully entered the 'dead' state. Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.715322165Z" level=info msg="runSandbox: deleting pod ID 32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1 from idIndex" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.715350114Z" level=info msg="runSandbox: removing pod sandbox 32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.715364312Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.715375499Z" level=info msg="runSandbox: unmounting shmPath for sandbox 32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.715325596Z" level=info msg="runSandbox: deleting pod ID b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de from idIndex" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.715440709Z" level=info msg="runSandbox: removing pod sandbox b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.715454003Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.715468054Z" level=info msg="runSandbox: unmounting shmPath for sandbox b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719296300Z" level=info msg="runSandbox: deleting pod ID 
cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea from idIndex" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719324814Z" level=info msg="runSandbox: removing pod sandbox cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719337527Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719349261Z" level=info msg="runSandbox: unmounting shmPath for sandbox cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719298669Z" level=info msg="runSandbox: deleting pod ID e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e from idIndex" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719420566Z" level=info msg="runSandbox: deleting pod ID 4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e from idIndex" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719450167Z" level=info msg="runSandbox: removing pod sandbox 4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719424261Z" level=info msg="runSandbox: removing pod sandbox e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719484412Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719498004Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719520960Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.719546743Z" level=info msg="runSandbox: unmounting shmPath for sandbox e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e" id=3079d767-8699-435a-8052-506b97dfbf01 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.729480373Z" level=info msg="runSandbox: removing pod sandbox from storage: 4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.729503347Z" level=info msg="runSandbox: removing pod sandbox from storage: b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.729503205Z" level=info msg="runSandbox: removing pod sandbox from storage: e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.730438201Z" level=info msg="runSandbox: removing pod sandbox from storage: 32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.732667083Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.732687158Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=de3635af-e617-4f77-9f86-ab4433ef41d1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.733195 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.733252 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.733278 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.733338 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.734427224Z" level=info msg="runSandbox: removing pod sandbox from storage: cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.735744617Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.735762509Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=a64520fb-067a-430a-8611-68ee932c123f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.735983 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.736024 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.736045 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.736089 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.738708805Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.738728148Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=678784ec-c0a7-4cb4-9c0d-6d49ad979869 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.738931 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.738978 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.739006 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.739057 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.741582351Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.741600252Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=3079d767-8699-435a-8052-506b97dfbf01 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.741816 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.741848 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.741868 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.741904 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.744531683Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.744550649Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=5e421e82-bc14-4c9e-a761-b594748ac82f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.744752 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.744785 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.744808 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:00.744845 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:00.790861 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:00.790931 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:00.791032 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:00.791186 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791228524Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791262947Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791302128Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791329259Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791402181Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791417999Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791453590Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:00.791279 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791477257Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791565025Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.791592339Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.816086923Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/76e97bac-4627-47b1-a884-3929006ec158 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.816108808Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.816922867Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/4e57aefd-6c77-40e3-9c52-50d372002b84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.816940774Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.817535628Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/b6005f8b-3a87-4bc4-93be-e6c18a7405ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.817554346Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.820196186Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/142e815d-7bc9-4c60-b4e8-8b1a6bc5bddd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.820223689Z" level=info msg="Adding pod 
openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.821021771Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/959920ef-11e4-4296-8340-80a6fd571b1e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:40:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:00.821043670Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2e40f522\x2df787\x2d4089\x2d8933\x2dfb904a2653eb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2e40f522\x2df787\x2d4089\x2d8933\x2dfb904a2653eb.mount has successfully entered the 'dead' state. Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2e40f522\x2df787\x2d4089\x2d8933\x2dfb904a2653eb.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2e40f522\x2df787\x2d4089\x2d8933\x2dfb904a2653eb.mount has successfully entered the 'dead' state. Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3433445d\x2defa2\x2d4622\x2d8128\x2d4a53381338bc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-3433445d\x2defa2\x2d4622\x2d8128\x2d4a53381338bc.mount has successfully entered the 'dead' state. Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3433445d\x2defa2\x2d4622\x2d8128\x2d4a53381338bc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-3433445d\x2defa2\x2d4622\x2d8128\x2d4a53381338bc.mount has successfully entered the 'dead' state. Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7219d756\x2dec09\x2d40c2\x2db55d\x2d9886e24ca10c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7219d756\x2dec09\x2d40c2\x2db55d\x2d9886e24ca10c.mount has successfully entered the 'dead' state. Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-cc8c6cdf0fd8729e3806d11c35aa04ba7e0fd90f1ce63fdd973bc689bf7c0fea-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5a6f624c\x2d1e79\x2d4ec7\x2d880b\x2d12beba0032b9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5a6f624c\x2d1e79\x2d4ec7\x2d880b\x2d12beba0032b9.mount has successfully entered the 'dead' state. 
Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5a6f624c\x2d1e79\x2d4ec7\x2d880b\x2d12beba0032b9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5a6f624c\x2d1e79\x2d4ec7\x2d880b\x2d12beba0032b9.mount has successfully entered the 'dead' state.
Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-64518841\x2dd256\x2d4169\x2d85d1\x2db33bf1e52654.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-64518841\x2dd256\x2d4169\x2d85d1\x2db33bf1e52654.mount has successfully entered the 'dead' state.
Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-b4742cb026d9dc703479c047f92bbc2c01b0dab97f8e8dac5eb8c978ef9bc8de-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-4679aeb0b7267f91cf674feaa26e66093dd74da800b77cb5707d3e6583b2a07e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e8b92508f0fb5e904627334f04085250d88457e0d1249eec023fe542d1a6b67e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:40:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-32116def1b486e114a4a2a41efe5e8b060f2863c3ac9f12e3876450a5bc2dbe1-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:40:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:05.996302 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:40:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:05.996767449Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:05.996833299Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:40:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:06.008436629Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/e37897bb-7c32-410b-b2c4-b236c2dded80 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:06.008464836Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:12.996666 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:40:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:12.997240 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:16.021892503Z" level=info msg="NetworkStart: stopping network for sandbox 2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:16.022238500Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/8ba8adc7-5066-49ed-8112-096600d17a78 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:16.022264710Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:16.022271697Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:16.022278271Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:17.022262652Z" level=info msg="NetworkStart: stopping network for sandbox 2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:17.022402242Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/e14a78ac-7e32-433a-9418-5c5dc447ab02 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:17.022423331Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:17.022429461Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:17.022436060Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029134836Z" level=info msg="NetworkStart: stopping network for sandbox 4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029216660Z" level=info msg="NetworkStart: stopping network for sandbox 1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029353945Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/a010c45d-b982-4d2c-9b8d-b1e3eec49007 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029369272Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/f5c195a1-00bc-455a-b80f-f7d5fa36f46d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029405492Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029415760Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029424175Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029376756Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029462351Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:18.029467966Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:24.777958575Z" level=info msg="NetworkStart: stopping network for sandbox 9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:24.778104347Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/a7555a51-2474-4170-ae24-15573fba7b02 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:24.778130894Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:24.778137959Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:24.778143886Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:25.000759 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:40:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:25.002142 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:27.907850 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:27.907877 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:27.907883 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:27.907892 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:27.907898 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:27.907906 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:40:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:27.907911 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:27.910234129Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=bea530e3-571a-4771-802c-4d7c63df56e8 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:40:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:27.910352136Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bea530e3-571a-4771-802c-4d7c63df56e8 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:40:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:28.021034314Z" level=info msg="NetworkStart: stopping network for sandbox 127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:28.021170971Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/ca2f5ed4-02ee-46cc-b402-bf70522b17ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:28.021192751Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:28.021200075Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:28.021211924Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:28.141848147Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:40:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:30.025533490Z" level=info msg="NetworkStart: stopping network for sandbox 96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:30.025714326Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/63b624b2-2641-4f76-b02f-43e908d52ba9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:30.025742735Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:30.025750382Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:30.025757834Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:31.021001995Z" level=info msg="NetworkStart: stopping network for sandbox 2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:31.021134955Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b9367f39-7d0f-4835-849a-7310097d5c50 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:31.021156127Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:31.021162852Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:31.021169289Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:32.020359997Z" level=info msg="NetworkStart: stopping network for sandbox fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:32.020494204Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/36e5daea-5e52-4738-a9c3-3710b0889596 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:32.020516527Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:32.020523120Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:32.020529449Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:33.022088158Z" level=info msg="NetworkStart: stopping network for sandbox d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:33.022245311Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/0894d3e7-4484-4bdd-a0e2-53677ab690ca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:33.022271182Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:33.022277926Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:33.022283847Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:35.797006003Z" level=info msg="NetworkStart: stopping network for sandbox 964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:35.797146849Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/0d3f7057-a8a1-4dee-a50b-c5b8abe0238e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:35.797169512Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:35.797175880Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:35.797181780Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:38.997085 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:40:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:38.997713 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:40:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:40.022513496Z" level=info msg="NetworkStart: stopping network for sandbox d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:40.022653753Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/75a7a8eb-1748-4386-b9e9-a7d0913e8713 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:40.022674854Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:40.022681510Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:40.022687395Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.829777473Z" level=info msg="NetworkStart: stopping network for sandbox 247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.829871690Z" level=info msg="NetworkStart: stopping network for sandbox 08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.829926826Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/76e97bac-4627-47b1-a884-3929006ec158 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.829950609Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.829957159Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.829962879Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830001653Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/4e57aefd-6c77-40e3-9c52-50d372002b84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830026045Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830033930Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830040660Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830831608Z" level=info msg="NetworkStart: stopping network for sandbox 50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830942514Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/b6005f8b-3a87-4bc4-93be-e6c18a7405ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830966484Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830973188Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.830979657Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.832145421Z" level=info msg="NetworkStart: stopping network for sandbox f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.832283716Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/142e815d-7bc9-4c60-b4e8-8b1a6bc5bddd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.832307808Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.832315326Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.832323145Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.835592267Z" level=info msg="NetworkStart: stopping network for sandbox ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.835734100Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/959920ef-11e4-4296-8340-80a6fd571b1e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.835758684Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.835766873Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:45.835773580Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:40:49.996654 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:40:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:40:49.997174 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:40:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:51.021053738Z" level=info msg="NetworkStart: stopping network for sandbox a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:40:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:51.021200456Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/e37897bb-7c32-410b-b2c4-b236c2dded80 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:40:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:51.021229750Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:40:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:51.021237987Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:40:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:51.021243833Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:40:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:40:58.142190627Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.032769546Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.033028572Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8ba8adc7\x2d5066\x2d49ed\x2d8112\x2d096600d17a78.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-8ba8adc7\x2d5066\x2d49ed\x2d8112\x2d096600d17a78.mount has successfully entered the 'dead' state.
Jan 23 17:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8ba8adc7\x2d5066\x2d49ed\x2d8112\x2d096600d17a78.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-8ba8adc7\x2d5066\x2d49ed\x2d8112\x2d096600d17a78.mount has successfully entered the 'dead' state.
Jan 23 17:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8ba8adc7\x2d5066\x2d49ed\x2d8112\x2d096600d17a78.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-8ba8adc7\x2d5066\x2d49ed\x2d8112\x2d096600d17a78.mount has successfully entered the 'dead' state.
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.072303699Z" level=info msg="runSandbox: deleting pod ID 2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5 from idIndex" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.072326392Z" level=info msg="runSandbox: removing pod sandbox 2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.072340679Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.072354520Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.088438504Z" level=info msg="runSandbox: removing pod sandbox from storage: 2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.091461023Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:01.091478455Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=b0b7c42d-ee08-42fe-9a9a-0116cade4da0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:01.091610 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:01.091656 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:01.091681 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:01.091733 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(2aefa28a6275762544c971f985442b1c52f5d44d9d5f347fad24f076788ad8f5): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:01.996579 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:41:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:01.997092 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.033459702Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.033495849Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e14a78ac\x2d7e32\x2d433a\x2d9418\x2d5c5dc447ab02.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e14a78ac\x2d7e32\x2d433a\x2d9418\x2d5c5dc447ab02.mount has successfully entered the 'dead' state.
Jan 23 17:41:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e14a78ac\x2d7e32\x2d433a\x2d9418\x2d5c5dc447ab02.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e14a78ac\x2d7e32\x2d433a\x2d9418\x2d5c5dc447ab02.mount has successfully entered the 'dead' state.
Jan 23 17:41:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e14a78ac\x2d7e32\x2d433a\x2d9418\x2d5c5dc447ab02.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-e14a78ac\x2d7e32\x2d433a\x2d9418\x2d5c5dc447ab02.mount has successfully entered the 'dead' state.
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.070300965Z" level=info msg="runSandbox: deleting pod ID 2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7 from idIndex" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.070326920Z" level=info msg="runSandbox: removing pod sandbox 2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.070340277Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.070352412Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.087431774Z" level=info msg="runSandbox: removing pod sandbox from storage: 2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.090984248Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:02.091002390Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=4044c909-968d-49f3-b5cd-3ce9f3f384e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:02.091217 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:41:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:02.091253 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:41:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:02.091287 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:41:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:02.091327 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(2b7da808349fe59d437591238ad1a86347a69f6ed242df1ab677fdfdbed63cf7): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.040172617Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.040224434Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.040768899Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.040798041Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a010c45d\x2db982\x2d4d2c\x2d9b8d\x2db1e3eec49007.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a010c45d\x2db982\x2d4d2c\x2d9b8d\x2db1e3eec49007.mount has successfully entered the 'dead' state.
Jan 23 17:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f5c195a1\x2d00bc\x2d455a\x2db80f\x2df7d5fa36f46d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-f5c195a1\x2d00bc\x2d455a\x2db80f\x2df7d5fa36f46d.mount has successfully entered the 'dead' state.
Jan 23 17:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a010c45d\x2db982\x2d4d2c\x2d9b8d\x2db1e3eec49007.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a010c45d\x2db982\x2d4d2c\x2d9b8d\x2db1e3eec49007.mount has successfully entered the 'dead' state.
Jan 23 17:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f5c195a1\x2d00bc\x2d455a\x2db80f\x2df7d5fa36f46d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-f5c195a1\x2d00bc\x2d455a\x2db80f\x2df7d5fa36f46d.mount has successfully entered the 'dead' state.
Jan 23 17:41:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f5c195a1\x2d00bc\x2d455a\x2db80f\x2df7d5fa36f46d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-f5c195a1\x2d00bc\x2d455a\x2db80f\x2df7d5fa36f46d.mount has successfully entered the 'dead' state.
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.081278960Z" level=info msg="runSandbox: deleting pod ID 1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91 from idIndex" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.081303630Z" level=info msg="runSandbox: removing pod sandbox 1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.081316465Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.081329226Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.081364625Z" level=info msg="runSandbox: deleting pod ID 4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc from idIndex" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.081387968Z" level=info msg="runSandbox: removing pod sandbox 4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.081400186Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.081411062Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.093435550Z" level=info msg="runSandbox: removing pod sandbox from storage: 4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.096978159Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.096996928Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=6c4790f5-b097-4372-969a-75c58536b790 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:03.097237 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:03.097280 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:03.097302 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:03.097363 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.101486413Z" level=info msg="runSandbox: removing pod sandbox from storage: 1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.104618983Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:03.104636107Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=e7997efa-703b-4693-9dec-c341eec281c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:03.104831 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:03.104861 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:03.104880 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:41:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:03.104915 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:41:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a010c45d\x2db982\x2d4d2c\x2d9b8d\x2db1e3eec49007.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a010c45d\x2db982\x2d4d2c\x2d9b8d\x2db1e3eec49007.mount has successfully entered the 'dead' state.
Jan 23 17:41:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1de63e79ee46b862fdfc509e1b52d936f3f9739fa893b92503f922e9d2e51d91-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:41:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4021fa4aadc89f89a401cccc504b378d4aba8fbf16f9f48be0d8a9fb89dc48fc-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.788935750Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.788978580Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:09 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a7555a51\x2d2474\x2d4170\x2dae24\x2d15573fba7b02.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a7555a51\x2d2474\x2d4170\x2dae24\x2d15573fba7b02.mount has successfully entered the 'dead' state. Jan 23 17:41:09 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a7555a51\x2d2474\x2d4170\x2dae24\x2d15573fba7b02.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a7555a51\x2d2474\x2d4170\x2dae24\x2d15573fba7b02.mount has successfully entered the 'dead' state. Jan 23 17:41:09 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a7555a51\x2d2474\x2d4170\x2dae24\x2d15573fba7b02.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a7555a51\x2d2474\x2d4170\x2dae24\x2d15573fba7b02.mount has successfully entered the 'dead' state. 
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.840320023Z" level=info msg="runSandbox: deleting pod ID 9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7 from idIndex" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.840348504Z" level=info msg="runSandbox: removing pod sandbox 9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.840364546Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.840377515Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:09 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.856489261Z" level=info msg="runSandbox: removing pod sandbox from storage: 9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.859894985Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.859912115Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=afa32bd0-b46e-4453-929e-17e6f700f29b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:09.860136 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:41:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:09.860178 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:41:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:09.860202 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:41:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:09.860255 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9109c29d2aad5167bcd049b216979a626e356ef70bfa17828c7b5ba50855aef7): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:41:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:09.926752 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.927079697Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.927110664Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.940939195Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/cff8284b-1b4e-4987-80b0-9ac1cfadd49e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:41:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:09.940969376Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:41:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:12.995836 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:12.996142953Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:12.996188324Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.007128387Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/d8eaac1f-f9fc-4e0d-be24-821d74c14696 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.007149727Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.031080637Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.031111573Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ca2f5ed4\x2d02ee\x2d46cc\x2db402\x2dbf70522b17ed.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ca2f5ed4\x2d02ee\x2d46cc\x2db402\x2dbf70522b17ed.mount has successfully entered the 'dead' state.
Jan 23 17:41:13 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ca2f5ed4\x2d02ee\x2d46cc\x2db402\x2dbf70522b17ed.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ca2f5ed4\x2d02ee\x2d46cc\x2db402\x2dbf70522b17ed.mount has successfully entered the 'dead' state.
Jan 23 17:41:13 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ca2f5ed4\x2d02ee\x2d46cc\x2db402\x2dbf70522b17ed.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ca2f5ed4\x2d02ee\x2d46cc\x2db402\x2dbf70522b17ed.mount has successfully entered the 'dead' state.
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.077308311Z" level=info msg="runSandbox: deleting pod ID 127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c from idIndex" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.077333639Z" level=info msg="runSandbox: removing pod sandbox 127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.077345809Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.077357681Z" level=info msg="runSandbox: unmounting shmPath for sandbox 127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.093442402Z" level=info msg="runSandbox: removing pod sandbox from storage: 127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.096212362Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:13.096230691Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=4a43467d-a91f-4a18-88d5-da817300a79e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:13.096439 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:41:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:13.096478 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:41:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:13.096498 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:41:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:13.096539 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:41:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-127d60e2e2927bb2c18c44bb2fd39dad0c9071bcb3b7b7b21c2f0cc0122d6f6c-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.037826610Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.037867988Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-63b624b2\x2d2641\x2d4f76\x2db02f\x2d43e908d52ba9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-63b624b2\x2d2641\x2d4f76\x2db02f\x2d43e908d52ba9.mount has successfully entered the 'dead' state.
Jan 23 17:41:15 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-63b624b2\x2d2641\x2d4f76\x2db02f\x2d43e908d52ba9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-63b624b2\x2d2641\x2d4f76\x2db02f\x2d43e908d52ba9.mount has successfully entered the 'dead' state.
Jan 23 17:41:15 hub-master-0.workload.bos2.lab systemd[1]: run-netns-63b624b2\x2d2641\x2d4f76\x2db02f\x2d43e908d52ba9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-63b624b2\x2d2641\x2d4f76\x2db02f\x2d43e908d52ba9.mount has successfully entered the 'dead' state.
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.080306768Z" level=info msg="runSandbox: deleting pod ID 96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f from idIndex" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.080335410Z" level=info msg="runSandbox: removing pod sandbox 96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.080352227Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.080368597Z" level=info msg="runSandbox: unmounting shmPath for sandbox 96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.096474489Z" level=info msg="runSandbox: removing pod sandbox from storage: 96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.100136926Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.100156485Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=3f799d5f-1c3f-45ad-97b5-b3e3bcd79951 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:15.100310 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:41:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:15.100472 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:41:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:15.100496 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:41:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:15.100547 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(96921728675ebdd49b9306fa4b1fb52f364a45096d2455243aa404688494026f): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:41:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:15.996437 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.996791561Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:15.996834271Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.007981492Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/050a8bfd-e43c-4159-8e21-74df5981d681 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.008000451Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.031976313Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.032005735Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b9367f39\x2d7d0f\x2d4835\x2d849a\x2d7310097d5c50.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b9367f39\x2d7d0f\x2d4835\x2d849a\x2d7310097d5c50.mount has successfully entered the 'dead' state.
Jan 23 17:41:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b9367f39\x2d7d0f\x2d4835\x2d849a\x2d7310097d5c50.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b9367f39\x2d7d0f\x2d4835\x2d849a\x2d7310097d5c50.mount has successfully entered the 'dead' state.
Jan 23 17:41:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b9367f39\x2d7d0f\x2d4835\x2d849a\x2d7310097d5c50.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b9367f39\x2d7d0f\x2d4835\x2d849a\x2d7310097d5c50.mount has successfully entered the 'dead' state.
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.082305705Z" level=info msg="runSandbox: deleting pod ID 2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5 from idIndex" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.082328608Z" level=info msg="runSandbox: removing pod sandbox 2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.082344039Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.082355892Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:16 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.098416045Z" level=info msg="runSandbox: removing pod sandbox from storage: 2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.101197594Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.101219886Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=ff4bd2d1-dff1-4765-8352-1db026fb8753 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:16.101547 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:41:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:16.101582 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:41:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:16.101603 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:41:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:16.101647 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(2f9896b7a114e4fd083bfbd4f54d76a3d3ca53bd014205e2fd34c9ee7cfaa1a5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:41:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:16.996072 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:41:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:16.996377 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.996407104Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.996437080Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:16.996554 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.996767226Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:16.996794189Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:16.997038 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.010915317Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/f93b80dd-1fd3-4448-be11-ba53356451be Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.010933590Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.011922207Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/d19349e4-4dd5-4606-b254-e1ef6d5d0d52 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.011939950Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.030774247Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32): error removing pod 
openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.030810147Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-36e5daea\x2d5e52\x2d4738\x2da9c3\x2d3710b0889596.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-36e5daea\x2d5e52\x2d4738\x2da9c3\x2d3710b0889596.mount has successfully entered the 'dead' state. Jan 23 17:41:17 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-36e5daea\x2d5e52\x2d4738\x2da9c3\x2d3710b0889596.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-36e5daea\x2d5e52\x2d4738\x2da9c3\x2d3710b0889596.mount has successfully entered the 'dead' state. Jan 23 17:41:17 hub-master-0.workload.bos2.lab systemd[1]: run-netns-36e5daea\x2d5e52\x2d4738\x2da9c3\x2d3710b0889596.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-36e5daea\x2d5e52\x2d4738\x2da9c3\x2d3710b0889596.mount has successfully entered the 'dead' state. Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.084328936Z" level=info msg="runSandbox: deleting pod ID fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32 from idIndex" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.084362614Z" level=info msg="runSandbox: removing pod sandbox fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.084379797Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.084395037Z" level=info msg="runSandbox: unmounting shmPath for sandbox fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.104459817Z" level=info msg="runSandbox: removing pod sandbox from storage: fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.107234616Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:17.107254307Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=7e82a384-2bc2-4b76-a168-3509a5a87ea1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:17.107478 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:41:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:17.107517 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:41:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:17.107541 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:41:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:17.107590 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(fdea34e8eb6b0121a0cacbcd6f00689420af42e44ab257b483851b86070dea32): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.034370781Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.034404535Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0894d3e7\x2d4484\x2d4bdd\x2da0e2\x2d53677ab690ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0894d3e7\x2d4484\x2d4bdd\x2da0e2\x2d53677ab690ca.mount has successfully entered the 'dead' state. Jan 23 17:41:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0894d3e7\x2d4484\x2d4bdd\x2da0e2\x2d53677ab690ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0894d3e7\x2d4484\x2d4bdd\x2da0e2\x2d53677ab690ca.mount has successfully entered the 'dead' state. Jan 23 17:41:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0894d3e7\x2d4484\x2d4bdd\x2da0e2\x2d53677ab690ca.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0894d3e7\x2d4484\x2d4bdd\x2da0e2\x2d53677ab690ca.mount has successfully entered the 'dead' state. 
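
The recurring "PollImmediate error waiting for ReadinessIndicatorFile" and "still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf" messages describe Multus polling for a file that the default network plugin (here OVN-Kubernetes) writes once it is up; until that file exists, every CNI add and delete fails and kubelet cannot create any pod sandbox. A minimal sketch of that polling pattern, using the k8s.io/apimachinery wait package the "PollImmediate" wording points at — the interval and timeout values below are illustrative assumptions, not values read from this cluster:

    package main

    import (
            "fmt"
            "os"
            "time"

            "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
            path := "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf" // path taken from the log
            // PollImmediate checks the condition once right away, then every
            // interval until it returns true or the timeout elapses.
            err := wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
                    _, statErr := os.Stat(path)
                    if statErr == nil {
                            return true, nil // readiness indicator file exists
                    }
                    if os.IsNotExist(statErr) {
                            return false, nil // keep polling
                    }
                    return false, statErr // any other error aborts the wait
            })
            if err != nil {
                    // wait.ErrWaitTimeout stringifies to "timed out waiting for
                    // the condition" -- the exact suffix seen in the log above.
                    fmt.Println("readiness wait failed:", err)
            }
    }

In this journal the wait keeps timing out on every attempt, so the underlying question is why OVN-Kubernetes never wrote the indicator file, not the polling itself.
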
Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.078307405Z" level=info msg="runSandbox: deleting pod ID d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba from idIndex" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.078332298Z" level=info msg="runSandbox: removing pod sandbox d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.078345755Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.078358345Z" level=info msg="runSandbox: unmounting shmPath for sandbox d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.095450752Z" level=info msg="runSandbox: removing pod sandbox from storage: d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.098942036Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:18.098959490Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=e80e029e-e56a-4343-a4bb-02c5b656e5de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:18.099143 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:41:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:18.099185 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:41:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:18.099216 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:41:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:18.099262 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(d7362242a3f2995649fe3c7128c7162db26e4d988e2673797689c6bb83dc66ba): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.807957517Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.807999095Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0d3f7057\x2da8a1\x2d4dee\x2da50b\x2dc5b8abe0238e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0d3f7057\x2da8a1\x2d4dee\x2da50b\x2dc5b8abe0238e.mount has successfully entered the 'dead' state. Jan 23 17:41:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0d3f7057\x2da8a1\x2d4dee\x2da50b\x2dc5b8abe0238e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0d3f7057\x2da8a1\x2d4dee\x2da50b\x2dc5b8abe0238e.mount has successfully entered the 'dead' state. Jan 23 17:41:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0d3f7057\x2da8a1\x2d4dee\x2da50b\x2dc5b8abe0238e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0d3f7057\x2da8a1\x2d4dee\x2da50b\x2dc5b8abe0238e.mount has successfully entered the 'dead' state. 
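
Each failed attempt above is reported four times, once per layer (remote_runtime.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go), and by the time pod_workers.go logs it the inner quotes have grown to \\\" because each layer re-quotes the error string it received. A minimal Go illustration of how that escaping compounds — the sample string is invented for the demo, not taken from the log:

    package main

    import "fmt"

    func main() {
            inner := `plugin type="multus" failed` // invented sample
            once := fmt.Sprintf("%q", inner)       // quotes become \"
            twice := fmt.Sprintf("%q", once)       // quotes become \\\"
            fmt.Println(once)  // "plugin type=\"multus\" failed"
            fmt.Println(twice) // "\"plugin type=\\\"multus\\\" failed\""
    }

So the triple-backslash runs in the pod_workers.go entries are an artifact of nested quoting, not corruption: all four lines carry the same underlying Multus error.
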
Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.856303734Z" level=info msg="runSandbox: deleting pod ID 964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7 from idIndex" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.856331497Z" level=info msg="runSandbox: removing pod sandbox 964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.856346009Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.856358424Z" level=info msg="runSandbox: unmounting shmPath for sandbox 964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.876442692Z" level=info msg="runSandbox: removing pod sandbox from storage: 964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.880000094Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.880019135Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=b83cd2d2-bc3c-446b-aff4-4f0c5d1973be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:20.880193 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:41:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:20.880246 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:41:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:20.880269 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:41:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:20.880317 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(964d1b8e94cc1b277cd77395920102e2c770a982dc221423373aa7fea7add8f7): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:41:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:20.946428 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.946746442Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.946776303Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.961209489Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/32fd9301-c7cd-4530-a401-5218f9c4d439 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:20.961232726Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:23.996133 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:23.996489868Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:23.996529887Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:24.007688600Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/b9a8dd02-6ea4-4e59-b741-6d8ff7c92f41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:24.007709045Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.032719258Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.032760751Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-75a7a8eb\x2d1748\x2d4386\x2db9e9\x2da7d0913e8713.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-75a7a8eb\x2d1748\x2d4386\x2db9e9\x2da7d0913e8713.mount has successfully entered the 'dead' state. Jan 23 17:41:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-75a7a8eb\x2d1748\x2d4386\x2db9e9\x2da7d0913e8713.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-75a7a8eb\x2d1748\x2d4386\x2db9e9\x2da7d0913e8713.mount has successfully entered the 'dead' state. Jan 23 17:41:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-75a7a8eb\x2d1748\x2d4386\x2db9e9\x2da7d0913e8713.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-75a7a8eb\x2d1748\x2d4386\x2db9e9\x2da7d0913e8713.mount has successfully entered the 'dead' state. 
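
The mount units named like run-netns-75a7a8eb\x2d1748\x2d4386\x2db9e9\x2da7d0913e8713.mount are not garbled either: systemd escapes "-" in unit names as \x2d, so each decodes back to the UUID-named uts/ipc/net namespace mount that is released as CRI-O tears a sandbox down. A tiny decoder sketch covering only the one escape seen in this journal (systemd-escape -u handles the general case):

    package main

    import (
            "fmt"
            "strings"
    )

    // unescapeUnit undoes the single systemd escape that appears here:
    // "-" encoded as \x2d. It is not a full systemd-escape implementation.
    func unescapeUnit(name string) string {
            return strings.ReplaceAll(name, `\x2d`, "-")
    }

    func main() {
            fmt.Println(unescapeUnit(`run-netns-75a7a8eb\x2d1748\x2d4386\x2db9e9\x2da7d0913e8713.mount`))
            // Output: run-netns-75a7a8eb-1748-4386-b9e9-a7d0913e8713.mount
    }
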
Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.074306068Z" level=info msg="runSandbox: deleting pod ID d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f from idIndex" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.074333275Z" level=info msg="runSandbox: removing pod sandbox d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.074350534Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.074364507Z" level=info msg="runSandbox: unmounting shmPath for sandbox d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.094490124Z" level=info msg="runSandbox: removing pod sandbox from storage: d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.097769056Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:25.097788736Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=05345970-f6ac-42fc-bbf5-f6bae77d7cbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:25.098059 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:41:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:25.098106 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:41:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:25.098130 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:41:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:25.098179 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(d8b5bc1a0295567d770ba4fe7c1d4f6ab929251d9ea15a1376ba8af15e59833f): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:27.908342 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:27.908361 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:27.908367 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:27.908374 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:27.908380 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:27.908388 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:27.908394 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:41:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:27.996140 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:41:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:27.996506179Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:27.996557976Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:28.008269352Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/c6f95e50-f3f1-4a97-856a-f716fb4bbb7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:28.008289539Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:28.141964296Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:41:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:28.995855 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:41:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:28.995953 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:41:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:28.996166783Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:28.996203945Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:28.996261305Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:28.996289724Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:29.010445245Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/797e69c8-ace1-4fe7-ae04-a3a07999bacd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:29.010465837Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:29.011665381Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/9aaa9066-db96-46a1-a2d1-42432c151a67 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:29.011684296Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.840961159Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.841002557Z" level=info msg="runSandbox: cleaning up namespaces 
after failing to run sandbox 50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.840967437Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.841080101Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.841044328Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.841132782Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.842981093Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.843010036Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682" 
id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b6005f8b\x2d3a87\x2d4bc4\x2d93be\x2de6c18a7405ed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b6005f8b\x2d3a87\x2d4bc4\x2d93be\x2de6c18a7405ed.mount has successfully entered the 'dead' state. Jan 23 17:41:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4e57aefd\x2d6c77\x2d40e3\x2d9c52\x2d50d372002b84.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4e57aefd\x2d6c77\x2d40e3\x2d9c52\x2d50d372002b84.mount has successfully entered the 'dead' state. Jan 23 17:41:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-76e97bac\x2d4627\x2d47b1\x2da884\x2d3929006ec158.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-76e97bac\x2d4627\x2d47b1\x2da884\x2d3929006ec158.mount has successfully entered the 'dead' state. Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.846348450Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.846390951Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-959920ef\x2d11e4\x2d4296\x2d8340\x2d80a6fd571b1e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-959920ef\x2d11e4\x2d4296\x2d8340\x2d80a6fd571b1e.mount has successfully entered the 'dead' state. Jan 23 17:41:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-142e815d\x2d7bc9\x2d4c60\x2db4e8\x2d8b1a6bc5bddd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-142e815d\x2d7bc9\x2d4c60\x2db4e8\x2d8b1a6bc5bddd.mount has successfully entered the 'dead' state. Jan 23 17:41:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b6005f8b\x2d3a87\x2d4bc4\x2d93be\x2de6c18a7405ed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b6005f8b\x2d3a87\x2d4bc4\x2d93be\x2de6c18a7405ed.mount has successfully entered the 'dead' state. 
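
Every failed sandbox triggers the same ordered teardown, visible as a run of "runSandbox: ..." messages per sandbox ID; when several fail at once (as at 17:41:30, where five sandboxes — controller-manager, apiserver, oauth-openshift, oauth-apiserver, and route-controller-manager — are cleaned up concurrently) the sequences interleave, which is why near-identical timestamps repeat with different id= values. An outline of the step order, reconstructed from the messages themselves; the function is a hypothetical stand-in, not CRI-O source:

    package main

    import "fmt"

    // teardown mirrors the step order of one "runSandbox" cleanup sequence in
    // this journal. The step names are quoted from the log; the body is a stub.
    func teardown(sandboxID string) {
            for _, step := range []string{
                    "deleting pod ID from idIndex",
                    "removing pod sandbox",
                    "deleting container ID from idIndex",
                    "unmounting shmPath", // pairs with the *-userdata-shm.mount units above
                    "removing pod sandbox from storage",
                    "releasing container name",
                    "releasing pod sandbox name",
            } {
                    fmt.Printf("runSandbox: %s (sandbox %s)\n", step, sandboxID)
            }
    }

    func main() {
            teardown("50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21")
    }
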
Jan 23 17:41:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-76e97bac\x2d4627\x2d47b1\x2da884\x2d3929006ec158.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-76e97bac\x2d4627\x2d47b1\x2da884\x2d3929006ec158.mount has successfully entered the 'dead' state. Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893313992Z" level=info msg="runSandbox: deleting pod ID 50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21 from idIndex" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893342335Z" level=info msg="runSandbox: removing pod sandbox 50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893358491Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893369748Z" level=info msg="runSandbox: unmounting shmPath for sandbox 50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893320262Z" level=info msg="runSandbox: deleting pod ID ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0 from idIndex" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893440102Z" level=info msg="runSandbox: removing pod sandbox ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893455983Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893470697Z" level=info msg="runSandbox: unmounting shmPath for sandbox ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893319975Z" level=info msg="runSandbox: deleting pod ID 08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0 from idIndex" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893509687Z" level=info msg="runSandbox: removing pod sandbox 08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893524048Z" level=info 
msg="runSandbox: deleting container ID from idIndex for sandbox 08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.893538254Z" level=info msg="runSandbox: unmounting shmPath for sandbox 08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.902302840Z" level=info msg="runSandbox: deleting pod ID 247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8 from idIndex" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.902329854Z" level=info msg="runSandbox: removing pod sandbox 247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.902342340Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.902354525Z" level=info msg="runSandbox: unmounting shmPath for sandbox 247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.902305723Z" level=info msg="runSandbox: deleting pod ID f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682 from idIndex" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.902414679Z" level=info msg="runSandbox: removing pod sandbox f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.902426466Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.902438715Z" level=info msg="runSandbox: unmounting shmPath for sandbox f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.910467914Z" level=info msg="runSandbox: removing pod sandbox from storage: 08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.910490861Z" level=info msg="runSandbox: removing pod sandbox from storage: 50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21" 
id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.910469942Z" level=info msg="runSandbox: removing pod sandbox from storage: ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.915731233Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.915766757Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=1241578c-c349-4313-8f5a-75dae3fc15ea name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.916643 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.916699 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.916722 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.916773 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.917453601Z" level=info msg="runSandbox: removing pod sandbox from storage: f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.917454078Z" level=info msg="runSandbox: removing pod sandbox from storage: 247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.920632106Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.920652442Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=74754acd-4ece-42a7-9723-1902ec69626e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.920862 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.920898 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.920922 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.920965 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.923772771Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.923794248Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=ba8efad4-73f7-441a-8e2d-24104b2004a2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.924017 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.924049 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.924069 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.924107 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.926714499Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.926731350Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=0d3ab29b-0ab4-4416-b1ec-8aa1bd4a7823 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.926921 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.926952 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.926982 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.927018 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.929638483Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.929656308Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=ee51da8c-82e7-40d8-9014-e8d076b33bc3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.929852 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.929903 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.929926 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:30.929973 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:30.965154 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:30.965322 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:30.965369 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:30.965501 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965381707Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965412634Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:30.965611 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965745198Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965771590Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965853997Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965881029Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965929496Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965955947Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965971460Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.965957512Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.992732219Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/90d3c8e2-305f-438f-a190-e08fb9dd381a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.992753040Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.993001307Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/f528928d-7314-4bf1-a97f-21a16244de61 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.993020675Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.996215203Z" level=info 
msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/6ca83dde-2f2f-414d-a854-68d3d0381824 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.996235699Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.998492693Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/6ac72b24-fa92-4f56-9d60-4daf64f675c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.998511052Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.999005166Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/f5af9487-2d75-4664-9b28-8aa24fa78009 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:41:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:30.999026439Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-959920ef\x2d11e4\x2d4296\x2d8340\x2d80a6fd571b1e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-959920ef\x2d11e4\x2d4296\x2d8340\x2d80a6fd571b1e.mount has successfully entered the 'dead' state. Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-959920ef\x2d11e4\x2d4296\x2d8340\x2d80a6fd571b1e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-959920ef\x2d11e4\x2d4296\x2d8340\x2d80a6fd571b1e.mount has successfully entered the 'dead' state. Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-142e815d\x2d7bc9\x2d4c60\x2db4e8\x2d8b1a6bc5bddd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-142e815d\x2d7bc9\x2d4c60\x2db4e8\x2d8b1a6bc5bddd.mount has successfully entered the 'dead' state. Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-142e815d\x2d7bc9\x2d4c60\x2db4e8\x2d8b1a6bc5bddd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-142e815d\x2d7bc9\x2d4c60\x2db4e8\x2d8b1a6bc5bddd.mount has successfully entered the 'dead' state. 
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b6005f8b\x2d3a87\x2d4bc4\x2d93be\x2de6c18a7405ed.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b6005f8b\x2d3a87\x2d4bc4\x2d93be\x2de6c18a7405ed.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4e57aefd\x2d6c77\x2d40e3\x2d9c52\x2d50d372002b84.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4e57aefd\x2d6c77\x2d40e3\x2d9c52\x2d50d372002b84.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4e57aefd\x2d6c77\x2d40e3\x2d9c52\x2d50d372002b84.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4e57aefd\x2d6c77\x2d40e3\x2d9c52\x2d50d372002b84.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-ad4d7e8fefc5cf6f1517ef17946308f64d05df57ec0cb92c9c7ff14f5f702df0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-76e97bac\x2d4627\x2d47b1\x2da884\x2d3929006ec158.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-76e97bac\x2d4627\x2d47b1\x2da884\x2d3929006ec158.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f9e848466ce12712cfc158af4c375dd555fc01184ce63fb6166ad076c262b682-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-50876b3e805c14e55567ff821a283d88007b8a04123221b6796547e084ff5d21-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-08a62bc95d8fec518c8170036b976c225d01c44ff0439ad84311ed05c2047cf0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-247f0bbc495202987cf2ac138796051ee9171b3fccb97a43351dc30ab0fc65f8-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:31.995907 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:41:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:31.996333648Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:31.996387745Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:41:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:31.996608 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:41:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:31.997125 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:41:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:32.012126352Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/514a26ae-0cfb-4d91-b97e-e2f40e02185b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:41:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:32.012152402Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.033663890Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.033707452Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e37897bb\x2d7c32\x2d410b\x2db2c4\x2db236c2dded80.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e37897bb\x2d7c32\x2d410b\x2db2c4\x2db236c2dded80.mount has successfully entered the 'dead' state.
Jan 23 17:41:36 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e37897bb\x2d7c32\x2d410b\x2db2c4\x2db236c2dded80.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e37897bb\x2d7c32\x2d410b\x2db2c4\x2db236c2dded80.mount has successfully entered the 'dead' state.
Jan 23 17:41:36 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e37897bb\x2d7c32\x2d410b\x2db2c4\x2db236c2dded80.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-e37897bb\x2d7c32\x2d410b\x2db2c4\x2db236c2dded80.mount has successfully entered the 'dead' state.
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.065309717Z" level=info msg="runSandbox: deleting pod ID a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc from idIndex" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.065336089Z" level=info msg="runSandbox: removing pod sandbox a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.065349136Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.065363444Z" level=info msg="runSandbox: unmounting shmPath for sandbox a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.082459999Z" level=info msg="runSandbox: removing pod sandbox from storage: a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.085728201Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.085748478Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=d3a77270-1290-4838-a282-489271dbd3bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:36.085935 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:41:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:36.085980 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:41:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:36.086002 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:41:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:36.086054 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a630cfebc19c588f56380766e3dab7b95c5a3c5ab5afbbccbd0079aede8625fc): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:41:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:36.996290 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.996595614Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:36.996629724Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:41:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:37.008055999Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ed8430c1-992c-4908-b51d-3727e45ef358 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:41:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:37.008076096Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:41:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:45.996105 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:41:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:45.996620 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:41:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:47.996794 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:41:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:47.997146071Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:47.997187584Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:41:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:48.008193278Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/08c0fece-9466-42ab-8bfe-f4c964a5f932 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:41:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:48.008225979Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:54.955966921Z" level=info msg="NetworkStart: stopping network for sandbox de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:54.956295768Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/cff8284b-1b4e-4987-80b0-9ac1cfadd49e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:54.956321450Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:54.956328952Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:41:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:54.956336693Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:41:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:58.020167308Z" level=info msg="NetworkStart: stopping network for sandbox f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:41:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:58.020324993Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/d8eaac1f-f9fc-4e0d-be24-821d74c14696 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:41:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:58.020349057Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:41:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:58.020355496Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:41:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:58.020362767Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:41:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:41:58.142414061Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:41:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:41:58.996678 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:41:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:41:58.997172 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:01.021064968Z" level=info msg="NetworkStart: stopping network for sandbox a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:01.021204293Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/050a8bfd-e43c-4159-8e21-74df5981d681 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:01.021231986Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:01.021238720Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:42:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:01.021244351Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025087996Z" level=info msg="NetworkStart: stopping network for sandbox 5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025100254Z" level=info msg="NetworkStart: stopping network for sandbox c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025228647Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/d19349e4-4dd5-4606-b254-e1ef6d5d0d52 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025250803Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025255724Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/f93b80dd-1fd3-4448-be11-ba53356451be Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025257206Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025283167Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025291875Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025297965Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:02.025286928Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:05.974463605Z" level=info msg="NetworkStart: stopping network for sandbox 1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:05.974606762Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/32fd9301-c7cd-4530-a401-5218f9c4d439 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:05.974632713Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:05.974638892Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:42:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:05.974644597Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:09.020292068Z" level=info msg="NetworkStart: stopping network for sandbox 17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:09.020441770Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/b9a8dd02-6ea4-4e59-b741-6d8ff7c92f41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:09.020465405Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:09.020472080Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:42:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:09.020478555Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:12.996931 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:42:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:12.997588 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:13.021580053Z" level=info msg="NetworkStart: stopping network for sandbox 4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:13.021739746Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/c6f95e50-f3f1-4a97-856a-f716fb4bbb7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:13.021767581Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:13.021775090Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:42:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:13.021784576Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.024110083Z" level=info msg="NetworkStart: stopping network for sandbox 460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.024275942Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/9aaa9066-db96-46a1-a2d1-42432c151a67 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.024878334Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.024901295Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.024916303Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.026950613Z" level=info msg="NetworkStart: stopping network for sandbox 198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.027967449Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/797e69c8-ace1-4fe7-ae04-a3a07999bacd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.028000232Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.028007040Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:42:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:14.028013559Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007330010Z" level=info msg="NetworkStart: stopping network for sandbox 62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007442320Z" level=info msg="NetworkStart: stopping network for sandbox 0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007491868Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/90d3c8e2-305f-438f-a190-e08fb9dd381a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007521712Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007529803Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007537754Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007584478Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/f528928d-7314-4bf1-a97f-21a16244de61 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007608025Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007614831Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.007622250Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.008642971Z" level=info msg="NetworkStart: stopping network for sandbox ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.008776349Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/6ca83dde-2f2f-414d-a854-68d3d0381824 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.008798837Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.008805558Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.008811582Z" level=info 
msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.010940535Z" level=info msg="NetworkStart: stopping network for sandbox 823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.011045800Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/6ac72b24-fa92-4f56-9d60-4daf64f675c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.011064335Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.011070661Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.011076796Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.012570239Z" level=info msg="NetworkStart: stopping network for sandbox b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.012675981Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/f5af9487-2d75-4664-9b28-8aa24fa78009 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.012695338Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.012701553Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:42:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:16.012707112Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:42:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:17.024161667Z" level=info msg="NetworkStart: stopping network for sandbox a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:17.024350298Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53 
UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/514a26ae-0cfb-4d91-b97e-e2f40e02185b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:42:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:17.024378215Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:42:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:17.024385797Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:42:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:17.024393094Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:42:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:22.020761167Z" level=info msg="NetworkStart: stopping network for sandbox 20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:22.020916525Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/ed8430c1-992c-4908-b51d-3727e45ef358 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:42:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:22.020938554Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:42:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:22.020946568Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:42:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:22.020953441Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:27.908666 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:27.908688 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:27.908695 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:27.908703 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:27.908712 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:27.908719 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:27.908729 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:27.997622 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:42:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:27.998129 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:42:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:28.142314393Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:42:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:33.022279139Z" level=info msg="NetworkStart: stopping network for sandbox 23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:33.022433195Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/08c0fece-9466-42ab-8bfe-f4c964a5f932 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:42:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:33.022459292Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:42:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:33.022466656Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:42:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:33.022472779Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:42:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:39.967623022Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:39 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:42:39.967669602Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cff8284b\x2d1b4e\x2d4987\x2d80b0\x2d9ac1cfadd49e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cff8284b\x2d1b4e\x2d4987\x2d80b0\x2d9ac1cfadd49e.mount has successfully entered the 'dead' state. Jan 23 17:42:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cff8284b\x2d1b4e\x2d4987\x2d80b0\x2d9ac1cfadd49e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cff8284b\x2d1b4e\x2d4987\x2d80b0\x2d9ac1cfadd49e.mount has successfully entered the 'dead' state. Jan 23 17:42:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cff8284b\x2d1b4e\x2d4987\x2d80b0\x2d9ac1cfadd49e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cff8284b\x2d1b4e\x2d4987\x2d80b0\x2d9ac1cfadd49e.mount has successfully entered the 'dead' state. Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.007356869Z" level=info msg="runSandbox: deleting pod ID de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35 from idIndex" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.007388758Z" level=info msg="runSandbox: removing pod sandbox de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.007404968Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.007418792Z" level=info msg="runSandbox: unmounting shmPath for sandbox de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.020433358Z" level=info msg="runSandbox: removing pod sandbox from storage: de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.023809026Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.023830988Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=bbbb8897-da32-4660-8d0f-747d8d93c967 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:40.024086 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:40.024134 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:40.024157 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:40.024204 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(de0b0b17aea73b9d37cbc539e2732d94c1425268840de3552f016e306b220b35): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:42:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:40.092087 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.092433313Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.092471131Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.103833773Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/a30f1e5e-cbab-466a-b965-c03405f74a4a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:40.103859177Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:42.996949 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:42:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:42.997565 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.031691439Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.031727247Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d8eaac1f\x2df9fc\x2d4e0d\x2dbe24\x2d821d74c14696.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d8eaac1f\x2df9fc\x2d4e0d\x2dbe24\x2d821d74c14696.mount has successfully entered the 'dead' state.
Jan 23 17:42:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d8eaac1f\x2df9fc\x2d4e0d\x2dbe24\x2d821d74c14696.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d8eaac1f\x2df9fc\x2d4e0d\x2dbe24\x2d821d74c14696.mount has successfully entered the 'dead' state.
Jan 23 17:42:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d8eaac1f\x2df9fc\x2d4e0d\x2dbe24\x2d821d74c14696.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d8eaac1f\x2df9fc\x2d4e0d\x2dbe24\x2d821d74c14696.mount has successfully entered the 'dead' state.
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.072318268Z" level=info msg="runSandbox: deleting pod ID f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121 from idIndex" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.072343402Z" level=info msg="runSandbox: removing pod sandbox f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.072359777Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.072371515Z" level=info msg="runSandbox: unmounting shmPath for sandbox f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.088451766Z" level=info msg="runSandbox: removing pod sandbox from storage: f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.091649508Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:43.091669985Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=5c7299cf-f6fd-4009-8c0e-8efedf1d092f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:43.091896 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:42:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:43.091937 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:42:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:43.091962 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:42:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:43.092006 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(f2b237eaabe343a39c5933016b555f506e8c662570a2067eb1572e21d4c5c121): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.032450820Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.032486682Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-050a8bfd\x2de43c\x2d4159\x2d8e21\x2d74df5981d681.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-050a8bfd\x2de43c\x2d4159\x2d8e21\x2d74df5981d681.mount has successfully entered the 'dead' state.
Jan 23 17:42:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-050a8bfd\x2de43c\x2d4159\x2d8e21\x2d74df5981d681.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-050a8bfd\x2de43c\x2d4159\x2d8e21\x2d74df5981d681.mount has successfully entered the 'dead' state.
Jan 23 17:42:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-050a8bfd\x2de43c\x2d4159\x2d8e21\x2d74df5981d681.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-050a8bfd\x2de43c\x2d4159\x2d8e21\x2d74df5981d681.mount has successfully entered the 'dead' state.
Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.075305711Z" level=info msg="runSandbox: deleting pod ID a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848 from idIndex" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.075329043Z" level=info msg="runSandbox: removing pod sandbox a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.075342367Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.075355243Z" level=info msg="runSandbox: unmounting shmPath for sandbox a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.095430201Z" level=info msg="runSandbox: removing pod sandbox from storage: a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.098969016Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:46.098986625Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=abca4d9d-3f76-4f57-a5e9-fb58fe5efed6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:46.099195 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:42:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:46.099242 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:42:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:46.099263 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:42:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:46.099309 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a8195a00fdc3e6188c0adaae4c1be1152d62c7581769700120e00d9cb86a3848): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.036081670Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.036113600Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.037016003Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.037047478Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d19349e4\x2d4dd5\x2d4606\x2db254\x2de1ef6d5d0d52.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d19349e4\x2d4dd5\x2d4606\x2db254\x2de1ef6d5d0d52.mount has successfully entered the 'dead' state. Jan 23 17:42:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f93b80dd\x2d1fd3\x2d4448\x2dbe11\x2dba53356451be.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f93b80dd\x2d1fd3\x2d4448\x2dbe11\x2dba53356451be.mount has successfully entered the 'dead' state. Jan 23 17:42:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d19349e4\x2d4dd5\x2d4606\x2db254\x2de1ef6d5d0d52.mount: Succeeded. 
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d19349e4\x2d4dd5\x2d4606\x2db254\x2de1ef6d5d0d52.mount has successfully entered the 'dead' state.
Jan 23 17:42:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f93b80dd\x2d1fd3\x2d4448\x2dbe11\x2dba53356451be.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-f93b80dd\x2d1fd3\x2d4448\x2dbe11\x2dba53356451be.mount has successfully entered the 'dead' state.
Jan 23 17:42:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d19349e4\x2d4dd5\x2d4606\x2db254\x2de1ef6d5d0d52.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d19349e4\x2d4dd5\x2d4606\x2db254\x2de1ef6d5d0d52.mount has successfully entered the 'dead' state.
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.069302272Z" level=info msg="runSandbox: deleting pod ID 5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361 from idIndex" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.069326643Z" level=info msg="runSandbox: removing pod sandbox 5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.069340902Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.069353363Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.076298364Z" level=info msg="runSandbox: deleting pod ID c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f from idIndex" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.076322873Z" level=info msg="runSandbox: removing pod sandbox c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.076334128Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.076345423Z" level=info msg="runSandbox: unmounting shmPath for sandbox c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.088438798Z" level=info msg="runSandbox: removing pod sandbox from storage: 5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.091832491Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.091849247Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=80f0928c-417a-4dcf-8d12-2aa2864bdeb0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:47.092021 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:42:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:47.092074 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:42:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:47.092099 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:42:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:47.092152 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.092396770Z" level=info msg="runSandbox: removing pod sandbox from storage: c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.095681747Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:47.095700417Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=1d1e841c-bc53-401d-a0af-b34dd7415133 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:47.095872 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:42:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:47.095904 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:42:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:47.095923 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:42:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:47.095958 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:42:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f93b80dd\x2d1fd3\x2d4448\x2dbe11\x2dba53356451be.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-f93b80dd\x2d1fd3\x2d4448\x2dbe11\x2dba53356451be.mount has successfully entered the 'dead' state.
Jan 23 17:42:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-5abe51db3a2a9777b8876987fdbb32b3e54fe0708ff4b1c0c30ac0f772e46361-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:42:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c07db09602d601b07399f37f69f6493eb2d4ee29740058913cd6b67029ea2b4f-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:50.986439577Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:50.986487815Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:50 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-32fd9301\x2dc7cd\x2d4530\x2da401\x2d5218f9c4d439.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-32fd9301\x2dc7cd\x2d4530\x2da401\x2d5218f9c4d439.mount has successfully entered the 'dead' state.
Jan 23 17:42:50 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-32fd9301\x2dc7cd\x2d4530\x2da401\x2d5218f9c4d439.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-32fd9301\x2dc7cd\x2d4530\x2da401\x2d5218f9c4d439.mount has successfully entered the 'dead' state.
Jan 23 17:42:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-32fd9301\x2dc7cd\x2d4530\x2da401\x2d5218f9c4d439.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-32fd9301\x2dc7cd\x2d4530\x2da401\x2d5218f9c4d439.mount has successfully entered the 'dead' state.
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.029278675Z" level=info msg="runSandbox: deleting pod ID 1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639 from idIndex" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.029303696Z" level=info msg="runSandbox: removing pod sandbox 1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.029319989Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.029332931Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.045432893Z" level=info msg="runSandbox: removing pod sandbox from storage: 1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.048983407Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.049000546Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=da7fbe47-326c-4109-9d2d-2e0d53eaa90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:51.049237 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:42:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:51.049281 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:42:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:51.049303 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:42:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:51.049355 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1704132bc1b3cda61ba4baf4ff5b76b9c1bbf1a51f493197130158ba3812e639): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30
Jan 23 17:42:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:51.113130 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.113437501Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.113470445Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.124086975Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/71caf1f0-83ab-4997-b28a-ac27e9e520f3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:51.124106439Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.031590124Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.031623635Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b9a8dd02\x2d6ea4\x2d4e59\x2db741\x2d6d8ff7c92f41.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b9a8dd02\x2d6ea4\x2d4e59\x2db741\x2d6d8ff7c92f41.mount has successfully entered the 'dead' state.
Jan 23 17:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b9a8dd02\x2d6ea4\x2d4e59\x2db741\x2d6d8ff7c92f41.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b9a8dd02\x2d6ea4\x2d4e59\x2db741\x2d6d8ff7c92f41.mount has successfully entered the 'dead' state.
Jan 23 17:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b9a8dd02\x2d6ea4\x2d4e59\x2db741\x2d6d8ff7c92f41.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b9a8dd02\x2d6ea4\x2d4e59\x2db741\x2d6d8ff7c92f41.mount has successfully entered the 'dead' state.
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.079281685Z" level=info msg="runSandbox: deleting pod ID 17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99 from idIndex" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.079308824Z" level=info msg="runSandbox: removing pod sandbox 17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.079321972Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.079333753Z" level=info msg="runSandbox: unmounting shmPath for sandbox 17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.095460636Z" level=info msg="runSandbox: removing pod sandbox from storage: 17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.098372730Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:54.098597886Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=211b50f9-70cf-4283-9e8f-d7cbe76df2ff name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:54.098826 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:54.098869 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:54.098892 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:42:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:54.098949 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(17b3855724bb65f805240943dab180205fbeb7ddb00d5383c256beff78a28c99): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:42:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:55.997213 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:42:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:55.997715 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:42:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:57.996753 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:57.997051769Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:57.997095467Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.009017932Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/e08582f4-7311-4534-876b-72b0b8ef2aea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.009040715Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.032807155Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.032835756Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c6f95e50\x2df3f1\x2d4a97\x2d856a\x2df716fb4bbb7f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c6f95e50\x2df3f1\x2d4a97\x2d856a\x2df716fb4bbb7f.mount has successfully entered the 'dead' state.
Jan 23 17:42:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c6f95e50\x2df3f1\x2d4a97\x2d856a\x2df716fb4bbb7f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c6f95e50\x2df3f1\x2d4a97\x2d856a\x2df716fb4bbb7f.mount has successfully entered the 'dead' state.
Jan 23 17:42:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c6f95e50\x2df3f1\x2d4a97\x2d856a\x2df716fb4bbb7f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c6f95e50\x2df3f1\x2d4a97\x2d856a\x2df716fb4bbb7f.mount has successfully entered the 'dead' state.
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.074403129Z" level=info msg="runSandbox: deleting pod ID 4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36 from idIndex" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.074426750Z" level=info msg="runSandbox: removing pod sandbox 4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.074439389Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.074450970Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.086436280Z" level=info msg="runSandbox: removing pod sandbox from storage: 4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.089239167Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.089259302Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=5afdd75d-7cba-441a-9b4b-9108fd779539 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:58.089437 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:42:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:58.089480 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:42:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:58.089505 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:42:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:58.089560 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.142360128Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:42:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:58.995498 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.995812873Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:58.995851154Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.006271462Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/1ae51cde-5458-4cc1-a4db-e5c73d011596 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.006296296Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:42:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-4ad779ad4812558a27c3c3ba33c64d5a705e3cb735e016ef39a27bf00d3b1f36-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.035982070Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.036012762Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.039151250Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.039188242Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9aaa9066\x2ddb96\x2d46a1\x2da2d1\x2d42432c151a67.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9aaa9066\x2ddb96\x2d46a1\x2da2d1\x2d42432c151a67.mount has successfully entered the 'dead' state.
Jan 23 17:42:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-797e69c8\x2dace1\x2d4fe7\x2dae04\x2da3a07999bacd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-797e69c8\x2dace1\x2d4fe7\x2dae04\x2da3a07999bacd.mount has successfully entered the 'dead' state.
Jan 23 17:42:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9aaa9066\x2ddb96\x2d46a1\x2da2d1\x2d42432c151a67.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9aaa9066\x2ddb96\x2d46a1\x2da2d1\x2d42432c151a67.mount has successfully entered the 'dead' state.
Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.090305113Z" level=info msg="runSandbox: deleting pod ID 460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f from idIndex" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.090331465Z" level=info msg="runSandbox: removing pod sandbox 460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.090343611Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.090355708Z" level=info msg="runSandbox: unmounting shmPath for sandbox 460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.098287352Z" level=info msg="runSandbox: deleting pod ID 198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab from idIndex" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.098316688Z" level=info msg="runSandbox: removing pod sandbox 198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.098330907Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.098345104Z" level=info msg="runSandbox: unmounting shmPath for sandbox 198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.106428808Z" level=info msg="runSandbox: removing pod sandbox from storage: 460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.109133054Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.109151124Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=56daa5ec-6122-4fb0-80ef-9f08233414ee name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: E0123 17:42:59.109291 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:59.109335 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:59.109357 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:59.109404 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.115512906Z" level=info msg="runSandbox: removing pod sandbox from storage: 198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.118858758Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.118878093Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=850cecdc-c0f8-4de1-8658-34452e153477 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:59.119059 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:59.119090 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:59.119110 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:42:59.119147 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:42:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:42:59.996297 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.996744081Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:42:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:42:59.996793341Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:43:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9aaa9066\x2ddb96\x2d46a1\x2da2d1\x2d42432c151a67.mount: Succeeded.
Jan 23 17:43:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-797e69c8\x2dace1\x2d4fe7\x2dae04\x2da3a07999bacd.mount: Succeeded.
Jan 23 17:43:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-797e69c8\x2dace1\x2d4fe7\x2dae04\x2da3a07999bacd.mount: Succeeded.
Jan 23 17:43:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-460d7a40f89aa1b9f4b5c5e4806d39346aeb9dc2d4cd57e29365aa5445794a0f-userdata-shm.mount: Succeeded.
Jan 23 17:43:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-198d16c6822632ca6929c0796fc4fc73bd8882541405125878506b003d3d37ab-userdata-shm.mount: Succeeded.
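Every CreatePodSandbox failure above shares one root cause: Multus will not attach any pod to "multus-cni-network" until the default network's readiness indicator file, /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, exists, and on this node OVN-Kubernetes has not written it. The "timed out waiting for the condition" text is the generic error of the Kubernetes poll helper (PollImmediate) that Multus uses for the wait. A minimal Go sketch of that gate, with illustrative interval and timeout values (assumptions; Multus's actual values come from its configuration):

```go
// Minimal sketch of the readiness-indicator wait behind the errors above.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

const indicatorFile = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

func waitForReadinessIndicator() error {
	// wait.PollImmediate runs the condition once right away, then every
	// interval until it returns true or the timeout elapses; on timeout the
	// returned error is the literal "timed out waiting for the condition".
	return wait.PollImmediate(1*time.Second, 60*time.Second, func() (bool, error) {
		if _, err := os.Stat(indicatorFile); err != nil {
			return false, nil // file not there yet; keep polling
		}
		return true, nil // OVN-Kubernetes has written its CNI config
	})
}

func main() {
	if err := waitForReadinessIndicator(); err != nil {
		fmt.Printf("still waiting for readinessindicatorfile @ %s: %v\n", indicatorFile, err)
		os.Exit(1)
	}
	fmt.Println("default network is ready")
}
```

Until that file appears, every CNI ADD on the node fails the same way and the affected pods stay stuck in sandbox creation, which is exactly the repeating pattern in this log.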
Jan 23 17:43:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:00.011812396Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/507cc1d7-b9e0-4db0-88c3-86ddc9a5c6bd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:00.011840592Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:00.995490 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:43:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:00.995890126Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:00.995944405Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.006674649Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/88cb89c5-82de-4c2c-a375-3241f848d399 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.006700375Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.018718379Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.018757001Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.019933968Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.019969786Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.020268568Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.020306641Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.022003339Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.022032093Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f528928d\x2d7314\x2d4bf1\x2da97f\x2d21a16244de61.mount: Succeeded. 
Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.022927157Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.022956580Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f5af9487\x2d2d75\x2d4664\x2d9b28\x2d8aa24fa78009.mount: Succeeded.
Jan 23 17:43:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6ac72b24\x2dfa92\x2d4f56\x2d9d60\x2d4daf64f675c1.mount: Succeeded.
Jan 23 17:43:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6ca83dde\x2d2f2f\x2d414d\x2da854\x2d68d3d0381824.mount: Succeeded.
Jan 23 17:43:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-90d3c8e2\x2d305f\x2d438f\x2da190\x2de08fb9dd381a.mount: Succeeded.
Jan 23 17:43:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-90d3c8e2\x2d305f\x2d438f\x2da190\x2de08fb9dd381a.mount: Succeeded.
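The same readiness gate also applies on DELETE: the "(on del)" failures above show the CNI teardown timing out even for sandboxes that never came up, after which CRI-O cleans the namespaces up anyway. On the kubelet side, each "Error syncing pod, skipping" simply requeues the pod; a moment later util.go logs "No sandbox for pod can be found. Need to start a new one" and a fresh sandbox ID is attempted. A rough Go sketch of that retry shape (illustrative only, not kubelet's actual pod-worker code; the interval is an assumption based on the timestamps here):

```go
// Illustrative sketch of the sync/requeue loop visible in the kubenswrapper
// messages: a failed CreatePodSandbox fails the sync, the pod is requeued,
// and the next attempt creates a brand-new sandbox ID.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// createSandbox is a hypothetical stand-in for the CRI RunPodSandbox call;
// in this log it always fails until the CNI readiness file appears.
func createSandbox(pod string) error {
	return errors.New("failed to create pod network sandbox: CNI not ready")
}

func main() {
	// Factor 1.0 keeps the interval fixed, roughly matching the ~2s cadence
	// between attempts in this log; the real kubelet backoff differs.
	backoff := wait.Backoff{Duration: 2 * time.Second, Factor: 1.0, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := createSandbox("openshift-apiserver/apiserver-746c4bf98c-9x4mg"); err != nil {
			fmt.Println("Error syncing pod, skipping:", err) // requeue and retry
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("still failing; pod remains unscheduled on the network:", err)
	}
}
```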
Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.074313165Z" level=info msg="runSandbox: deleting pod ID 62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6 from idIndex" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.074341907Z" level=info msg="runSandbox: removing pod sandbox 62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.074357036Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.074369747Z" level=info msg="runSandbox: unmounting shmPath for sandbox 62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.074455397Z" level=info msg="runSandbox: deleting pod ID ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b from idIndex" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.074482260Z" level=info msg="runSandbox: removing pod sandbox ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.074497259Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.074509916Z" level=info msg="runSandbox: unmounting shmPath for sandbox ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.077298444Z" level=info msg="runSandbox: deleting pod ID 823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688 from idIndex" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.077325681Z" level=info msg="runSandbox: removing pod sandbox 823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.077339371Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.077352029Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox 823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.078296410Z" level=info msg="runSandbox: deleting pod ID b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92 from idIndex" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.078325103Z" level=info msg="runSandbox: removing pod sandbox b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.078337530Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.078349475Z" level=info msg="runSandbox: unmounting shmPath for sandbox b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.078300183Z" level=info msg="runSandbox: deleting pod ID 0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37 from idIndex" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.078419307Z" level=info msg="runSandbox: removing pod sandbox 0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.078431252Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.078445123Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.083482003Z" level=info msg="runSandbox: removing pod sandbox from storage: ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.086719808Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.086739132Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=d088358b-f7f0-4266-84d8-addaf377a35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.086963 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.087007 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.087030 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.087076 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.091547429Z" level=info msg="runSandbox: removing pod sandbox from storage: 62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.094838748Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.094857905Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=b63338aa-7c96-44b0-b86c-91935d39e10b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.095083 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.095116 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.095137 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.095178 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.095451435Z" level=info msg="runSandbox: removing pod sandbox from storage: 0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.095472711Z" level=info msg="runSandbox: removing pod sandbox from storage: 823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.095527814Z" level=info msg="runSandbox: removing pod sandbox from storage: b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.098723776Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.098743196Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f8a9b7b2-bc0d-499a-bb02-f52347ec13d6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.098982 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.099018 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.099041 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.099078 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.101775405Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.101794076Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=8925763c-9c99-45cb-9ad9-a345fa479b30 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.102035 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.102068 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.102089 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.102125 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.104805086Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.104822559Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=299ab405-87b6-4015-a89e-a7539e4fc710 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.105044 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.105074 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.105095 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:01.105130 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:01.132712 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:01.132811 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:01.132958 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:01.133074 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133081514Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133113533Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:01.133126 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133199099Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133237408Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133328613Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133346762Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133360487Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133365632Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133328785Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.133433559Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.161179136Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff UID:1886664c-cb49-48f7-b263-eff19ad90869 
NetNS:/var/run/netns/f9636998-713b-4678-9618-bcd3b78e0c9d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.161201527Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.161843156Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/123c584d-65f5-4032-883c-2a6f34da1a4c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.161861603Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.162965444Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9a66ddaf-e8c6-4ab9-b370-2c9ed0cdccd1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.162984277Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.166086593Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c702f878-3b9c-44ac-8b88-a9518c79867a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.166109127Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.166911693Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/bb2bcfcb-53ad-45aa-a073-c1f60a53a163 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:01.166935032Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f5af9487\x2d2d75\x2d4664\x2d9b28\x2d8aa24fa78009.mount: Succeeded. 
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f5af9487\x2d2d75\x2d4664\x2d9b28\x2d8aa24fa78009.mount: Succeeded.
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6ac72b24\x2dfa92\x2d4f56\x2d9d60\x2d4daf64f675c1.mount: Succeeded.
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6ac72b24\x2dfa92\x2d4f56\x2d9d60\x2d4daf64f675c1.mount: Succeeded.
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6ca83dde\x2d2f2f\x2d414d\x2da854\x2d68d3d0381824.mount: Succeeded.
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6ca83dde\x2d2f2f\x2d414d\x2da854\x2d68d3d0381824.mount: Succeeded.
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f528928d\x2d7314\x2d4bf1\x2da97f\x2d21a16244de61.mount: Succeeded.
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f528928d\x2d7314\x2d4bf1\x2da97f\x2d21a16244de61.mount: Succeeded.
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-90d3c8e2\x2d305f\x2d438f\x2da190\x2de08fb9dd381a.mount: Succeeded.
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-62a156c18cc290c231a35b9c3876867abf48d5831810f009b2b38424161d94b6-userdata-shm.mount: Succeeded.
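The "\x2d" runs in these transient unit names are systemd's escaping of hyphens in the underlying paths: run-netns-<uuid>.mount corresponds to /run/netns/<uuid>, and this burst of units entering the 'dead' state is just the namespace and shm mounts of the discarded sandboxes being unmounted. A small helper for reading the names back into paths (it handles only the escapes that occur in this log; systemd's full escaping covers more characters):

```go
// Turn a systemd-escaped mount unit name back into a filesystem path,
// covering only the "-" -> "/" and "\x2d" -> "-" mappings seen here.
package main

import (
	"fmt"
	"strings"
)

func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	// Order matters: protect literal hyphens before mapping "-" to "/".
	name = strings.ReplaceAll(name, `\x2d`, "\x00")
	name = strings.ReplaceAll(name, "-", "/")
	name = strings.ReplaceAll(name, "\x00", "-")
	return "/" + name
}

func main() {
	fmt.Println(unitToPath(`run-netns-9aaa9066\x2ddb96\x2d46a1\x2da2d1\x2d42432c151a67.mount`))
	// Output: /run/netns/9aaa9066-db96-46a1-a2d1-42432c151a67
}
```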
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b41b07577da356c9cfcdeadb29c33b6072c2196fe9c324b3298bc47529e17d92-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ee7909d60c6c22394cac407cb050d510c9b84844e543649330c4ea827596fd7b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-823bb9316357a637c0c61c331a3434681ba26a5856c9cc4dba9ee189c6a75688-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-0403797b692e1a20fc4f0bb804d219549763f811887ec63bbd6f50fff3365b37-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.036318313Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.036372663Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-514a26ae\x2d0cfb\x2d4d91\x2db97e\x2de2f40e02185b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-514a26ae\x2d0cfb\x2d4d91\x2db97e\x2de2f40e02185b.mount has successfully entered the 'dead' state. 
Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-514a26ae\x2d0cfb\x2d4d91\x2db97e\x2de2f40e02185b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-514a26ae\x2d0cfb\x2d4d91\x2db97e\x2de2f40e02185b.mount has successfully entered the 'dead' state. Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-514a26ae\x2d0cfb\x2d4d91\x2db97e\x2de2f40e02185b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-514a26ae\x2d0cfb\x2d4d91\x2db97e\x2de2f40e02185b.mount has successfully entered the 'dead' state. Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.094341281Z" level=info msg="runSandbox: deleting pod ID a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53 from idIndex" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.094372113Z" level=info msg="runSandbox: removing pod sandbox a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.094399584Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.094414703Z" level=info msg="runSandbox: unmounting shmPath for sandbox a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.106446110Z" level=info msg="runSandbox: removing pod sandbox from storage: a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.109370957Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:02.109390604Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=280211ab-f663-43c7-8268-56fa98b63779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:02.109616 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:43:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:02.109782 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:43:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:02.109808 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:43:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:02.109858 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(a0639e06de380aa70f04d0945c37f15102a7bdd9964d2486d762df49f33a4a53): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.032150093Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.032189622Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ed8430c1\x2d992c\x2d4908\x2db51d\x2d3727e45ef358.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ed8430c1\x2d992c\x2d4908\x2db51d\x2d3727e45ef358.mount has successfully entered the 'dead' state. Jan 23 17:43:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ed8430c1\x2d992c\x2d4908\x2db51d\x2d3727e45ef358.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ed8430c1\x2d992c\x2d4908\x2db51d\x2d3727e45ef358.mount has successfully entered the 'dead' state. Jan 23 17:43:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ed8430c1\x2d992c\x2d4908\x2db51d\x2d3727e45ef358.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ed8430c1\x2d992c\x2d4908\x2db51d\x2d3727e45ef358.mount has successfully entered the 'dead' state. 
Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.083305755Z" level=info msg="runSandbox: deleting pod ID 20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445 from idIndex" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.083330660Z" level=info msg="runSandbox: removing pod sandbox 20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.083346655Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.083361392Z" level=info msg="runSandbox: unmounting shmPath for sandbox 20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.103422204Z" level=info msg="runSandbox: removing pod sandbox from storage: 20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.107147233Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:07.107164139Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=269a98b3-2604-4eea-a171-95e3c7a04137 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:07.107429 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:43:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:07.107478 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:43:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:07.107500 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:43:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:07.107548 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(20bb0d342e207daca84e63aec53977a411763f62caf416c0e6d2d0081005f445): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:43:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495788.1216] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:43:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495788.1222] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:43:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495788.1222] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:43:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495788.1224] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:43:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495788.1228] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:43:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495788.1232] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:43:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:08.996010 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:43:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:08.996346341Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:08.996385921Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:09.007620235Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/8f73cb56-afd0-4fea-82d1-5960e4958579 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:09.007827020Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:09 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495789.3362] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:43:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:09.996931 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:43:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:09.997438 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:43:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 
17:43:10.996208 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:43:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:10.996319 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:43:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:10.996450 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:43:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:10.996570076Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:10.996613364Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:10.996699225Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:10.996743755Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:10.996757687Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:10.996790950Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:11.019017709Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/a1e09af0-018e-4d67-a32f-4ca72827e4d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:11.019043051Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:11.019940484Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/097ef477-f274-4e3d-947d-b5e5f20217a3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:11.019963551Z" level=info msg="Adding pod 
openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:11.021304762Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/38187a09-905a-4914-90b5-a9cd92efa987 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:11.021325851Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:13.996143 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:43:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:13.996523130Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:13.996575254Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:14.008146083Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/227e2c4e-6506-4f78-ab81-4dd049062d9b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:14.008168474Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.032846762Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.032885598Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-08c0fece\x2d9466\x2d42ab\x2d8bfe\x2df4c964a5f932.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-08c0fece\x2d9466\x2d42ab\x2d8bfe\x2df4c964a5f932.mount has successfully entered the 'dead' state. Jan 23 17:43:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-08c0fece\x2d9466\x2d42ab\x2d8bfe\x2df4c964a5f932.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-08c0fece\x2d9466\x2d42ab\x2d8bfe\x2df4c964a5f932.mount has successfully entered the 'dead' state. Jan 23 17:43:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-08c0fece\x2d9466\x2d42ab\x2d8bfe\x2df4c964a5f932.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-08c0fece\x2d9466\x2d42ab\x2d8bfe\x2df4c964a5f932.mount has successfully entered the 'dead' state. Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.079306840Z" level=info msg="runSandbox: deleting pod ID 23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6 from idIndex" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.079333917Z" level=info msg="runSandbox: removing pod sandbox 23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.079346739Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.079359869Z" level=info msg="runSandbox: unmounting shmPath for sandbox 23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.094414348Z" level=info msg="runSandbox: removing pod sandbox from storage: 23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.097757573Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:18.097776411Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=3d05617b-a1d4-43fe-95a6-53899e0c1223 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:18.097999 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:43:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:18.098049 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:43:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:18.098075 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:43:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:18.098124 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(23ed733070e89b16d17bbf50e622a507aa9d36f844f7647d0ef00a3e24262ad6): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:43:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:20.996294 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:43:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:20.996640687Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:20.996679056Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:21.008777176Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/7cfbad74-c846-4eeb-95ef-dbd13e3926d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:21.008798506Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:23.996689 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:43:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:23.997329 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:43:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:25.118829753Z" level=info msg="NetworkStart: stopping network for sandbox 03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:25.118969945Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/a30f1e5e-cbab-466a-b965-c03405f74a4a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:25.118993395Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:43:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:25.119000174Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:43:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:25.119006240Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:27.909006 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:27.909026 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:27.909034 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:27.909041 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:27.909050 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:27.909057 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:43:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:27.909063 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:43:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:28.141347824Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:43:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:29.996380 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:43:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:29.996777164Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:43:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:29.996824290Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:30.011397026Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/2bd36f3f-e59e-4742-b95d-0f42539b57ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:43:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:30.011425862Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:43:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:34.996195 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" Jan 23 17:43:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:34.996952196Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=dd364390-e7bb-4863-8717-9bc6a6198470 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:43:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:34.997342565Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dd364390-e7bb-4863-8717-9bc6a6198470 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:43:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:34.997827559Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=5a4e4442-dfe8-4186-9d61-a103e3e47c13 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:43:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:34.997923312Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5a4e4442-dfe8-4186-9d61-a103e3e47c13 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:43:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:34.998896198Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=976aee29-7b59-4b7a-9b7d-49f8a640c4c2 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:43:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:34.998964984Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:43:35 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope. -- Subject: Unit crio-conmon-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:43:35 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e. -- Subject: Unit crio-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.122026966Z" level=info msg="Created container 4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=976aee29-7b59-4b7a-9b7d-49f8a640c4c2 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.122563130Z" level=info msg="Starting container: 4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" id=d60ac50c-b636-4141-8d19-d40c6f763cc6 name=/runtime.v1.RuntimeService/StartContainer Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.141448146Z" level=info msg="Started container" PID=173418 containerID=4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=d60ac50c-b636-4141-8d19-d40c6f763cc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.146943611Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.156880901Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.156903958Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.156917731Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.165905255Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.165925301Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.165938499Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.174793498Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.174812297Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.174821713Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.183285892Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.183307160Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.183319319Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:43:35 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:43:35.191561343Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:43:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:35.191578556Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:43:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:35.198980 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/195.log" Jan 23 17:43:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:35.199811 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e} Jan 23 17:43:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:35.199963 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 17:43:35 hub-master-0.workload.bos2.lab conmon[173397]: conmon 4ef176b949aa2a9d0d30 : container 173418 exited with status 1 Jan 23 17:43:35 hub-master-0.workload.bos2.lab systemd[1]: crio-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope has successfully entered the 'dead' state. Jan 23 17:43:35 hub-master-0.workload.bos2.lab systemd[1]: crio-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope: Consumed 555ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope completed and consumed the indicated resources. Jan 23 17:43:35 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope has successfully entered the 'dead' state. Jan 23 17:43:35 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope: Consumed 52ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e.scope completed and consumed the indicated resources. 
Jan 23 17:43:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:36.138286522Z" level=info msg="NetworkStart: stopping network for sandbox b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:36.138424936Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/71caf1f0-83ab-4997-b28a-ac27e9e520f3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:36.138445878Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:36.138452194Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:36.138458946Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:36.203787 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/196.log"
Jan 23 17:43:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:36.204318 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/195.log"
Jan 23 17:43:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:36.205287 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" exitCode=1
Jan 23 17:43:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:36.205308 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e}
Jan 23 17:43:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:36.205327 8631 scope.go:115] "RemoveContainer" containerID="d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927"
Jan 23 17:43:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:36.206228 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:43:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:36.206286936Z" level=info msg="Removing container: d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927" id=23b94d07-b795-4a53-8570-f4665864d929 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:43:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:36.206745 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:43:36 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-3cf68ad4966f320a5e2d98612bb960b1a4895b86a354410636170d9f5e275319-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-3cf68ad4966f320a5e2d98612bb960b1a4895b86a354410636170d9f5e275319-merged.mount has successfully entered the 'dead' state.
Jan 23 17:43:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:36.234154651Z" level=info msg="Removed container d4e19a3827626f411cd7bc813e897f7a722f2fbf3dd8c8d87c9c9e79dbe03927: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=23b94d07-b795-4a53-8570-f4665864d929 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:43:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:37.208733 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/196.log"
Jan 23 17:43:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:37.210826 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:43:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:37.211351 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:43:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:43.022195228Z" level=info msg="NetworkStart: stopping network for sandbox 9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:43.022437174Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/e08582f4-7311-4534-876b-72b0b8ef2aea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:43.022461910Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:43.022469989Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:43.022477866Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:44.021067453Z" level=info msg="NetworkStart: stopping network for sandbox 9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:44.021265027Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/1ae51cde-5458-4cc1-a4db-e5c73d011596 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:44.021291209Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:44.021299057Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:44.021306522Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:45.025637627Z" level=info msg="NetworkStart: stopping network for sandbox 69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:45.025774439Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/507cc1d7-b9e0-4db0-88c3-86ddc9a5c6bd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:45.025797752Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:45.025804541Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:45.025811258Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.020354390Z" level=info msg="NetworkStart: stopping network for sandbox ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.020496895Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/88cb89c5-82de-4c2c-a375-3241f848d399 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.020519202Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.020526231Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.020533121Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175258269Z" level=info msg="NetworkStart: stopping network for sandbox a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175282971Z" level=info msg="NetworkStart: stopping network for sandbox d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175404259Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/f9636998-713b-4678-9618-bcd3b78e0c9d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175405829Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/123c584d-65f5-4032-883c-2a6f34da1a4c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175447470Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175454235Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175460735Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175428807Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175504202Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.175510471Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.176713684Z" level=info msg="NetworkStart: stopping network for sandbox 731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.176820892Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9a66ddaf-e8c6-4ab9-b370-2c9ed0cdccd1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.176840025Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.176846321Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.176852148Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.178960243Z" level=info msg="NetworkStart: stopping network for sandbox be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179090463Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/c702f878-3b9c-44ac-8b88-a9518c79867a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179116402Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179125737Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179132710Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179725993Z" level=info msg="NetworkStart: stopping network for sandbox 3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179829838Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/bb2bcfcb-53ad-45aa-a073-c1f60a53a163 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179851011Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179857433Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:46.179864008Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:43:49.996769 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:43:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:43:49.997463 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:43:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:54.022222496Z" level=info msg="NetworkStart: stopping network for sandbox ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:54.022368719Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/8f73cb56-afd0-4fea-82d1-5960e4958579 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:54.022391109Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:54.022397678Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:54.022403980Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034055811Z" level=info msg="NetworkStart: stopping network for sandbox c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034236575Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/a1e09af0-018e-4d67-a32f-4ca72827e4d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034264580Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034272674Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034281069Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034583233Z" level=info msg="NetworkStart: stopping network for sandbox 3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034723022Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/38187a09-905a-4914-90b5-a9cd92efa987 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034748399Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034756965Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034764951Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034593611Z" level=info msg="NetworkStart: stopping network for sandbox 97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034905705Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/097ef477-f274-4e3d-947d-b5e5f20217a3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034928997Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034935393Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:56.034942018Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:43:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:58.143573262Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:43:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:59.021254886Z" level=info msg="NetworkStart: stopping network for sandbox 636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:43:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:59.021430587Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/227e2c4e-6506-4f78-ab81-4dd049062d9b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:43:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:59.021453655Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:43:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:59.021460473Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:43:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:43:59.021466930Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:44:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:01.996386 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:44:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:01.996955 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:44:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:06.021886805Z" level=info msg="NetworkStart: stopping network for sandbox e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:06.022021163Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/7cfbad74-c846-4eeb-95ef-dbd13e3926d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:44:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:06.022046151Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:44:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:06.022052419Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:44:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:06.022059109Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.131249748Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.131293860Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a30f1e5e\x2dcbab\x2d466a\x2db965\x2dc03405f74a4a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a30f1e5e\x2dcbab\x2d466a\x2db965\x2dc03405f74a4a.mount has successfully entered the 'dead' state.
Jan 23 17:44:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a30f1e5e\x2dcbab\x2d466a\x2db965\x2dc03405f74a4a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a30f1e5e\x2dcbab\x2d466a\x2db965\x2dc03405f74a4a.mount has successfully entered the 'dead' state.
Jan 23 17:44:10 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a30f1e5e\x2dcbab\x2d466a\x2db965\x2dc03405f74a4a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a30f1e5e\x2dcbab\x2d466a\x2db965\x2dc03405f74a4a.mount has successfully entered the 'dead' state.
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.170369394Z" level=info msg="runSandbox: deleting pod ID 03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097 from idIndex" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.170398803Z" level=info msg="runSandbox: removing pod sandbox 03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.170418903Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.170430205Z" level=info msg="runSandbox: unmounting shmPath for sandbox 03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.190456042Z" level=info msg="runSandbox: removing pod sandbox from storage: 03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.194085850Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.194105131Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=525591af-150c-44a0-be03-2b6863a161b2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:10.194301 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:10.194349 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:44:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:10.194376 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:44:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:10.194431 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(03b82166e3505b662944136fdb122df15bf55691df89e2e5a6dbdd5d6a3e6097): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:44:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:10.265489 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.265704145Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.265739056Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.277399101Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/6346b5e3-45c0-473c-b774-9d7ab2250a7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:44:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:10.277419151Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:44:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:15.025787147Z" level=info msg="NetworkStart: stopping network for sandbox bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:15.025932977Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/2bd36f3f-e59e-4742-b95d-0f42539b57ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:44:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:15.025957539Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:44:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:15.025964062Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:44:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:15.025970488Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:44:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:16.996820 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:44:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:16.997480 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.150654675Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.150693789Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-71caf1f0\x2d83ab\x2d4997\x2db28a\x2dac27e9e520f3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-71caf1f0\x2d83ab\x2d4997\x2db28a\x2dac27e9e520f3.mount has successfully entered the 'dead' state.
Jan 23 17:44:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-71caf1f0\x2d83ab\x2d4997\x2db28a\x2dac27e9e520f3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-71caf1f0\x2d83ab\x2d4997\x2db28a\x2dac27e9e520f3.mount has successfully entered the 'dead' state.
Jan 23 17:44:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-71caf1f0\x2d83ab\x2d4997\x2db28a\x2dac27e9e520f3.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-71caf1f0\x2d83ab\x2d4997\x2db28a\x2dac27e9e520f3.mount has successfully entered the 'dead' state.
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.199347672Z" level=info msg="runSandbox: deleting pod ID b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200 from idIndex" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.199378396Z" level=info msg="runSandbox: removing pod sandbox b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.199398447Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.199411181Z" level=info msg="runSandbox: unmounting shmPath for sandbox b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.212428682Z" level=info msg="runSandbox: removing pod sandbox from storage: b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.215452400Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.215470883Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=7f243467-e151-41d2-a160-fc3ef2f1c19b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:21.215715 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:21.215767 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:44:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:21.215791 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:44:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:21.215846 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30
Jan 23 17:44:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:21.285689 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.285976464Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.286010248Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.296448873Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/70c6a60d-28e8-4293-b70b-9e528e200960 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:44:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:21.296468840Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:44:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-b8406205d88e0223ea8461ed5e8dd068c13bd1d7a8a82737483a6612ddcd9200-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:27.909739 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:27.909759 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:27.909766 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:27.909774 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:27.909785 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:27.909793 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:27.909798 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:27.997427 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:44:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:27.997931 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.032213147Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.032261151Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e08582f4\x2d7311\x2d4534\x2d876b\x2d72b0b8ef2aea.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e08582f4\x2d7311\x2d4534\x2d876b\x2d72b0b8ef2aea.mount has successfully entered the 'dead' state.
Jan 23 17:44:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e08582f4\x2d7311\x2d4534\x2d876b\x2d72b0b8ef2aea.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e08582f4\x2d7311\x2d4534\x2d876b\x2d72b0b8ef2aea.mount has successfully entered the 'dead' state.
Jan 23 17:44:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e08582f4\x2d7311\x2d4534\x2d876b\x2d72b0b8ef2aea.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-e08582f4\x2d7311\x2d4534\x2d876b\x2d72b0b8ef2aea.mount has successfully entered the 'dead' state.
Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.081311195Z" level=info msg="runSandbox: deleting pod ID 9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2 from idIndex" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.081335710Z" level=info msg="runSandbox: removing pod sandbox 9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.081350233Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.081363838Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.097464142Z" level=info msg="runSandbox: removing pod sandbox from storage: 9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.100788391Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.100806315Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=67d6fe39-2345-4845-ab1b-f6fb126b664f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:28.101026 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:44:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:28.101066 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:28.101090 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:28.101135 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(9b1077bb3b39c24f1ee5baafce42b3870710623db4a9146154ebd64255ad51d2): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:44:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:28.141417522Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.033661039Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.033697067Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:29 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1ae51cde\x2d5458\x2d4cc1\x2da4db\x2de5c73d011596.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1ae51cde\x2d5458\x2d4cc1\x2da4db\x2de5c73d011596.mount has successfully entered the 'dead' state. Jan 23 17:44:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1ae51cde\x2d5458\x2d4cc1\x2da4db\x2de5c73d011596.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1ae51cde\x2d5458\x2d4cc1\x2da4db\x2de5c73d011596.mount has successfully entered the 'dead' state. Jan 23 17:44:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1ae51cde\x2d5458\x2d4cc1\x2da4db\x2de5c73d011596.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1ae51cde\x2d5458\x2d4cc1\x2da4db\x2de5c73d011596.mount has successfully entered the 'dead' state. 
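Every add/del failure above bottoms out in the same wait: before delegating to the cluster's default network, Multus polls for a readiness indicator file (here /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, written by ovn-kubernetes once it is up) and aborts the CNI operation when the poll times out, producing the "timed out waiting for the condition" text repeated through this log. A minimal Go sketch of that polling pattern, using only the standard library in place of the k8s.io/apimachinery wait.PollImmediate helper the messages name; the one-second interval and ten-second timeout are illustrative values, not Multus's configuration:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForReadinessIndicator checks for the indicator file immediately,
// then once per interval, and gives up after timeout -- the
// "PollImmediate" behaviour referenced in the log lines above.
func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // default network published its config; proceed with the CNI add/del
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Illustrative call; the real file is written by ovn-kubernetes when ready.
	if err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", time.Second, 10*time.Second); err != nil {
		fmt.Println("PollImmediate error waiting for ReadinessIndicatorFile:", err)
	}
}

Until that file exists, every sandbox create and delete on the node keeps failing the same way, which is why the identical error recurs below for pod after pod.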
Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.081302363Z" level=info msg="runSandbox: deleting pod ID 9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93 from idIndex" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.081327264Z" level=info msg="runSandbox: removing pod sandbox 9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.081340203Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.081353383Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.101484830Z" level=info msg="runSandbox: removing pod sandbox from storage: 9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.105075507Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:29.105092949Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=47be6401-0019-4591-ab15-e96f1728fb2d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:29.105310 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:29.105353 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:44:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:29.105373 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:44:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:29.105414 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c96d00a54611f5c4b60f61e55fabe195b9fe4775b2a5350bf06af2384c49f93): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.037140407Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.037174197Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-507cc1d7\x2db9e0\x2d4db0\x2d88c3\x2d86ddc9a5c6bd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-507cc1d7\x2db9e0\x2d4db0\x2d88c3\x2d86ddc9a5c6bd.mount has successfully entered the 'dead' state.
Jan 23 17:44:30 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-507cc1d7\x2db9e0\x2d4db0\x2d88c3\x2d86ddc9a5c6bd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-507cc1d7\x2db9e0\x2d4db0\x2d88c3\x2d86ddc9a5c6bd.mount has successfully entered the 'dead' state.
Jan 23 17:44:30 hub-master-0.workload.bos2.lab systemd[1]: run-netns-507cc1d7\x2db9e0\x2d4db0\x2d88c3\x2d86ddc9a5c6bd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-507cc1d7\x2db9e0\x2d4db0\x2d88c3\x2d86ddc9a5c6bd.mount has successfully entered the 'dead' state.
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.080307770Z" level=info msg="runSandbox: deleting pod ID 69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547 from idIndex" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.080332701Z" level=info msg="runSandbox: removing pod sandbox 69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.080345706Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.080357098Z" level=info msg="runSandbox: unmounting shmPath for sandbox 69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.096470423Z" level=info msg="runSandbox: removing pod sandbox from storage: 69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.099854007Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:30.099872720Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=c1dee320-59e9-403a-9b62-fe84cb1629f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:30.100098 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:30.100143 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:44:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:30.100167 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:44:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:30.100221 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.031377659Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.031414025Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-88cb89c5\x2d82de\x2d4c2c\x2da375\x2d3241f848d399.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-88cb89c5\x2d82de\x2d4c2c\x2da375\x2d3241f848d399.mount has successfully entered the 'dead' state.
Jan 23 17:44:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-88cb89c5\x2d82de\x2d4c2c\x2da375\x2d3241f848d399.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-88cb89c5\x2d82de\x2d4c2c\x2da375\x2d3241f848d399.mount has successfully entered the 'dead' state.
Jan 23 17:44:31 hub-master-0.workload.bos2.lab systemd[1]: run-netns-88cb89c5\x2d82de\x2d4c2c\x2da375\x2d3241f848d399.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-88cb89c5\x2d82de\x2d4c2c\x2da375\x2d3241f848d399.mount has successfully entered the 'dead' state.
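Each failed RunPodSandbox above is followed by the same runSandbox teardown, in the order the messages show: delete the pod ID from the idIndex, remove the sandbox, delete the container ID, unmount the shmPath, remove the sandbox from storage, then release the container and sandbox names so the next attempt can reuse them. A hedged Go sketch of that ordering; the types and stubbed step bodies are hypothetical, and only the sequence is taken from the log:

package main

import "log"

// cleanupStep pairs a log-visible description with a (stubbed) action.
type cleanupStep struct {
	msg string
	run func() error
}

// runSandboxCleanup mirrors the order of operations visible in the log
// when the runtime abandons a sandbox after a CNI failure. Only the
// sequence is grounded in the messages above; the bodies are stubs.
func runSandboxCleanup(sandboxID string) {
	steps := []cleanupStep{
		{"deleting pod ID " + sandboxID + " from idIndex", func() error { return nil }},
		{"removing pod sandbox " + sandboxID, func() error { return nil }},
		{"deleting container ID from idIndex for sandbox " + sandboxID, func() error { return nil }},
		{"unmounting shmPath for sandbox " + sandboxID, func() error { return nil }},
		{"removing pod sandbox from storage: " + sandboxID, func() error { return nil }},
		{"releasing container name", func() error { return nil }},
		{"releasing pod sandbox name", func() error { return nil }},
	}
	for _, s := range steps {
		log.Printf("runSandbox: %s", s.msg)
		if err := s.run(); err != nil {
			log.Printf("runSandbox: %s failed: %v", s.msg, err)
		}
	}
}

func main() {
	runSandboxCleanup("69787966c5c7e7f3354058ca36b6242e08ece7282b928d497b00669c13743547")
}

The shm unmount step is what triggers the interleaved systemd "run-containers-storage-overlay...-userdata-shm.mount: Succeeded." entries.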
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.084303570Z" level=info msg="runSandbox: deleting pod ID ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a from idIndex" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.084333128Z" level=info msg="runSandbox: removing pod sandbox ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.084346681Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.084358286Z" level=info msg="runSandbox: unmounting shmPath for sandbox ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.098475603Z" level=info msg="runSandbox: removing pod sandbox from storage: ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.102022077Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.102040696Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=5143584c-f2e5-46f6-9a90-98004a08e825 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.102159 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.102198 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.102226 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.102271 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(ea713443f7d9033f7d6def679a51d0389004126d79f3f644812c82818d1f6a6a): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.187133035Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.187171702Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.187448603Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.187479022Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.188117376Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.188156224Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.188604654Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.188635278Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.190067912Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.190100795Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9a66ddaf\x2de8c6\x2d4ab9\x2db370\x2d2c9ed0cdccd1.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9a66ddaf\x2de8c6\x2d4ab9\x2db370\x2d2c9ed0cdccd1.mount has successfully entered the 'dead' state.
Jan 23 17:44:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-123c584d\x2d65f5\x2d4032\x2d883c\x2d2a6f34da1a4c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-123c584d\x2d65f5\x2d4032\x2d883c\x2d2a6f34da1a4c.mount has successfully entered the 'dead' state.
Jan 23 17:44:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f9636998\x2d713b\x2d4678\x2d9618\x2dbcd3b78e0c9d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-f9636998\x2d713b\x2d4678\x2d9618\x2dbcd3b78e0c9d.mount has successfully entered the 'dead' state.
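Note also how a single CNI timeout surfaces four times per pod in the kubenswrapper lines: remote_runtime.go logs the failed CRI call, kuberuntime_sandbox.go and kuberuntime_manager.go log it again as the error is returned up the stack, and pod_workers.go finally records "Error syncing pod, skipping" and leaves the pod for a later sync retry. A small Go sketch of that log-and-rewrap pattern using standard error wrapping; the function names are illustrative, not kubelet's actual call chain:

package main

import (
	"errors"
	"fmt"
)

var errCNIAdd = errors.New("timed out waiting for the condition")

// runPodSandbox stands in for the CRI RunPodSandbox RPC that fails first.
func runPodSandbox() error {
	return fmt.Errorf("rpc error: failed to create pod network sandbox: %w", errCNIAdd)
}

// createSandbox stands in for the kuberuntime layer: it logs the failure
// it sees, then returns the same error to its caller, which logs again.
func createSandbox(pod string) error {
	if err := runPodSandbox(); err != nil {
		fmt.Printf("E ... \"Failed to create sandbox for pod\" err=%q pod=%q\n", err, pod)
		return err
	}
	return nil
}

// syncPod stands in for pod_workers: it wraps the error once more,
// records it, and skips this sync; a later sync attempt retries.
func syncPod(pod string) {
	if err := createSandbox(pod); err != nil {
		fmt.Printf("E ... \"Error syncing pod, skipping\" err=%q pod=%q\n",
			fmt.Errorf("failed to \"CreatePodSandbox\": %w", err), pod)
	}
}

func main() {
	syncPod("openshift-dns/dns-default-srzv5")
}

This is why the log volume is roughly four error lines per pod per retry even though there is only one underlying failure, the missing readiness indicator file.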
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232326199Z" level=info msg="runSandbox: deleting pod ID a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff from idIndex" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232351334Z" level=info msg="runSandbox: deleting pod ID be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726 from idIndex" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232381565Z" level=info msg="runSandbox: removing pod sandbox be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232396409Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232407829Z" level=info msg="runSandbox: unmounting shmPath for sandbox be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232353983Z" level=info msg="runSandbox: deleting pod ID 3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612 from idIndex" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232478319Z" level=info msg="runSandbox: removing pod sandbox 3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232492105Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232504066Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232354412Z" level=info msg="runSandbox: deleting pod ID d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0 from idIndex" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232536437Z" level=info msg="runSandbox: removing pod sandbox d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232551063Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232566447Z" level=info msg="runSandbox: unmounting shmPath for sandbox d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232355084Z" level=info msg="runSandbox: removing pod sandbox a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232621131Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.232639891Z" level=info msg="runSandbox: unmounting shmPath for sandbox a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.240279171Z" level=info msg="runSandbox: deleting pod ID 731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1 from idIndex" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.240301595Z" level=info msg="runSandbox: removing pod sandbox 731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.240313992Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.240326432Z" level=info msg="runSandbox: unmounting shmPath for sandbox 731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.253545595Z" level=info msg="runSandbox: removing pod sandbox from storage: be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.253545836Z" level=info msg="runSandbox: removing pod sandbox from storage: a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.253609101Z" level=info msg="runSandbox: removing pod sandbox from storage: d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.253617335Z" level=info msg="runSandbox: removing pod sandbox from storage: 3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.256950757Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.256969426Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=c8af1ca2-7f31-4bb4-93a2-2ebe5ede3e04 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.257234 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.257279 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.257304 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.257352 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.258441943Z" level=info msg="runSandbox: removing pod sandbox from storage: 731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.259974390Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.259992766Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b8331850-32b8-489f-b399-1f5d7a4411a4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.260224 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.260261 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.260282 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.260321 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.266693187Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.266718083Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2f66dd54-2057-4515-b51e-b409c61da606 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.266921 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.266952 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.266975 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.267012 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.270075381Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.270097924Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=5edfe514-9b62-4376-92cf-dd08f4e93517 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.270322 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.270355 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.270374 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.270412 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.273036866Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.273055427Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=3cc571a7-6f98-44e4-ab9e-57f7edc666fa name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.273224 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.273257 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.273276 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:31.273314 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:31.304885 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:31.304965 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:31.305071 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:31.305200 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:44:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:31.305230 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305220148Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305249818Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305341743Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305372606Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305474348Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305492310Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305508958Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305545124Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305477298Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.305621053Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.330718890Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/ce630d16-04af-490b-bc45-79189c00749e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.330742571Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.331283715Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 
NetNS:/var/run/netns/9f2bf18a-c901-4ec9-a749-547aa95446b5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.331305893Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.334546954Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/b97f78f2-c1b8-466a-b609-df6f74268051 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.334569390Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.335704646Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/6b2963f9-572b-4ffd-92d8-d2e101755141 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.335725149Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.337128000Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/237c4715-fc7d-47d3-b70b-56db5da06ee4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:31.337149634Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bb2bcfcb\x2d53ad\x2d45aa\x2da073\x2dc1f60a53a163.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bb2bcfcb\x2d53ad\x2d45aa\x2da073\x2dc1f60a53a163.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bb2bcfcb\x2d53ad\x2d45aa\x2da073\x2dc1f60a53a163.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c702f878\x2d3b9c\x2d44ac\x2d8b88\x2da9518c79867a.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c702f878\x2d3b9c\x2d44ac\x2d8b88\x2da9518c79867a.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c702f878\x2d3b9c\x2d44ac\x2d8b88\x2da9518c79867a.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9a66ddaf\x2de8c6\x2d4ab9\x2db370\x2d2c9ed0cdccd1.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9a66ddaf\x2de8c6\x2d4ab9\x2db370\x2d2c9ed0cdccd1.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-123c584d\x2d65f5\x2d4032\x2d883c\x2d2a6f34da1a4c.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-123c584d\x2d65f5\x2d4032\x2d883c\x2d2a6f34da1a4c.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3724d081944daea61bd15f95d9732a380daf1401d8d911b6930fc17e5b95b612-userdata-shm.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f9636998\x2d713b\x2d4678\x2d9618\x2dbcd3b78e0c9d.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f9636998\x2d713b\x2d4678\x2d9618\x2dbcd3b78e0c9d.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d56698783460725e56c9ee3da20873936bde29dd6870fe81d4daa54ce2c2dff0-userdata-shm.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-be016cde64021f5b8ab858425e800ab5a1a34dd47b8d9f111f8234e2c81b2726-userdata-shm.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-731c37e683446b727122c82f9200676c869ead7064f26207750c962a51e5c7a1-userdata-shm.mount: Succeeded. Jan 23 17:44:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a3fe26fdc501425e7a08a2d6916a110ea3f7ce98f54719c47eaf367682d6b3ff-userdata-shm.mount: Succeeded.
Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1248] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1252] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1254] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1554] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1555] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1565] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1567] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1568] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1569] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1572] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:44:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495878.1576] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.032664447Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.032699704Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8f73cb56\x2dafd0\x2d4fea\x2d82d1\x2d5960e4958579.mount: Succeeded. Jan 23 17:44:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8f73cb56\x2dafd0\x2d4fea\x2d82d1\x2d5960e4958579.mount: Succeeded. Jan 23 17:44:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8f73cb56\x2dafd0\x2d4fea\x2d82d1\x2d5960e4958579.mount: Succeeded. Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.070279396Z" level=info msg="runSandbox: deleting pod ID ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316 from idIndex" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.070306843Z" level=info msg="runSandbox: removing pod sandbox ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.070320648Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.070332164Z" level=info msg="runSandbox: unmounting shmPath for sandbox ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316-userdata-shm.mount: Succeeded.
Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.086429406Z" level=info msg="runSandbox: removing pod sandbox from storage: ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.089756080Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:39.089775384Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=d1fa023a-a5cb-4c99-b0b4-56f7f1bd0abc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:39.089907 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:44:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:39.090092 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:44:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:39.090115 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:44:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:39.090160 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(ed1f0d4ce6a0471ef67ed03c63ee4f2ac0ed5d2ae234952ca984efc91679a316): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:44:40 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495880.2349] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.045783454Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.045831415Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.046281058Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox 
Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.046330372Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.046365652Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.046400204Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-38187a09\x2d905a\x2d4914\x2d90b5\x2da9cd92efa987.mount: Succeeded. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-097ef477\x2df274\x2d4e3d\x2d947d\x2db5e5f20217a3.mount: Succeeded. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a1e09af0\x2d018e\x2d4d67\x2da32f\x2d4ca72827e4d5.mount: Succeeded. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-38187a09\x2d905a\x2d4914\x2d90b5\x2da9cd92efa987.mount: Succeeded. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-097ef477\x2df274\x2d4e3d\x2d947d\x2db5e5f20217a3.mount: Succeeded. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a1e09af0\x2d018e\x2d4d67\x2da32f\x2d4ca72827e4d5.mount: Succeeded. Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098331001Z" level=info msg="runSandbox: deleting pod ID 97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e from idIndex" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098361572Z" level=info msg="runSandbox: removing pod sandbox 97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098375998Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098388616Z" level=info msg="runSandbox: unmounting shmPath for sandbox 97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098341877Z" level=info msg="runSandbox: deleting pod ID 3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8 from idIndex" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098453723Z" level=info msg="runSandbox: removing pod sandbox 3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098473998Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098488728Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098342591Z" level=info msg="runSandbox: deleting pod ID c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500 from idIndex" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098606901Z" level=info msg="runSandbox: removing pod sandbox c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098620472Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500"
id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.098633348Z" level=info msg="runSandbox: unmounting shmPath for sandbox c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.110442442Z" level=info msg="runSandbox: removing pod sandbox from storage: 97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.110475913Z" level=info msg="runSandbox: removing pod sandbox from storage: c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.111451547Z" level=info msg="runSandbox: removing pod sandbox from storage: 3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.114080476Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.114099410Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=553831ab-b9cf-4c34-bf1a-02627c261358 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.114259 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.114307 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.114331 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.114382 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.120743494Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.120766625Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=71262a9a-7a4d-4696-9240-d9a1dc1d27a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.120993 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.121027 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.121048 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.121092 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.123781516Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.123799345Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=4f15ddaf-ac69-4f26-b92a-c1096c8a61ab name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.124007 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.124051 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.124086 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.124130 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-38187a09\x2d905a\x2d4914\x2d90b5\x2da9cd92efa987.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-38187a09\x2d905a\x2d4914\x2d90b5\x2da9cd92efa987.mount has successfully entered the 'dead' state. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-097ef477\x2df274\x2d4e3d\x2d947d\x2db5e5f20217a3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-097ef477\x2df274\x2d4e3d\x2d947d\x2db5e5f20217a3.mount has successfully entered the 'dead' state. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a1e09af0\x2d018e\x2d4d67\x2da32f\x2d4ca72827e4d5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a1e09af0\x2d018e\x2d4d67\x2da32f\x2d4ca72827e4d5.mount has successfully entered the 'dead' state. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-97ec3484634b6995346d1a014051dea4d0c880b1f9db1f9d032b328364b7075e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-3b22180bb142ecd45392584bb6a1d64dbe197a2eb7c07616134c723c1162f6f8-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:44:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c25554ff8997daf8dda413a044cd916574c6be2d0d23869a1983a28edac55500-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:41.996358 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.996774888Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:41.996828488Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:41.997119 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:44:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:41.997646 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:42.008501035Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/45ac8ba5-6e4f-4086-aea0-83323237afdb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:42.008526732Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:42.996223 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:44:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:42.996382 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:42.996544045Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:42.996579740Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:42.996652502Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:42.996695285Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:43.010468855Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/85838751-231f-4ff1-8f4a-9ba5b90eb4f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:43.010495864Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:43.011946682Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/e0192e74-0dcf-4494-b0ca-2a5ec665ac69 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:43.011969631Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:43.996129 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:43.996448280Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:43.996489823Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.007265987Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8cd84912-e4fc-4860-ae68-7a96e3c8db9b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.007285161Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.032201369Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.032234558Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-227e2c4e\x2d6506\x2d4f78\x2dab81\x2d4dd049062d9b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-227e2c4e\x2d6506\x2d4f78\x2dab81\x2d4dd049062d9b.mount has successfully entered the 'dead' state. Jan 23 17:44:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-227e2c4e\x2d6506\x2d4f78\x2dab81\x2d4dd049062d9b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-227e2c4e\x2d6506\x2d4f78\x2dab81\x2d4dd049062d9b.mount has successfully entered the 'dead' state. Jan 23 17:44:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-227e2c4e\x2d6506\x2d4f78\x2dab81\x2d4dd049062d9b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-227e2c4e\x2d6506\x2d4f78\x2dab81\x2d4dd049062d9b.mount has successfully entered the 'dead' state. 
Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.075285936Z" level=info msg="runSandbox: deleting pod ID 636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0 from idIndex" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.075311470Z" level=info msg="runSandbox: removing pod sandbox 636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.075325858Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.075338124Z" level=info msg="runSandbox: unmounting shmPath for sandbox 636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.099405978Z" level=info msg="runSandbox: removing pod sandbox from storage: 636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.102269828Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:44.102289321Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=58e9e87e-2d81-42fb-91b6-2d8049ddbadb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:44.102472 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:44:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:44.102514 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:44.102537 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:44.102583 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(636819bba3d9d72107a175d69d037280da4f79728d0a9643cd3057c46435cbc0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:44:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:50.995560 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:44:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:50.995913417Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:50.995969572Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.007710291Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/bee26b02-51c6-4172-842f-2ede09c63101 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.007737700Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.032883371Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.032912652Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7cfbad74\x2dc846\x2d4eeb\x2d95ef\x2ddbd13e3926d5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7cfbad74\x2dc846\x2d4eeb\x2d95ef\x2ddbd13e3926d5.mount has successfully entered the 'dead' state. Jan 23 17:44:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7cfbad74\x2dc846\x2d4eeb\x2d95ef\x2ddbd13e3926d5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7cfbad74\x2dc846\x2d4eeb\x2d95ef\x2ddbd13e3926d5.mount has successfully entered the 'dead' state. 
Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.073305411Z" level=info msg="runSandbox: deleting pod ID e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28 from idIndex" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.073329201Z" level=info msg="runSandbox: removing pod sandbox e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.073342905Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.073353608Z" level=info msg="runSandbox: unmounting shmPath for sandbox e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.098409873Z" level=info msg="runSandbox: removing pod sandbox from storage: e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.101368590Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:51.101386364Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=e2c899af-c967-4ba9-91e0-d6304c0adf37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:51.101514 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:44:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:51.101553 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:51.101576 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:51.101619 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:44:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7cfbad74\x2dc846\x2d4eeb\x2d95ef\x2ddbd13e3926d5.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7cfbad74\x2dc846\x2d4eeb\x2d95ef\x2ddbd13e3926d5.mount has successfully entered the 'dead' state. Jan 23 17:44:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e5041f90d20ca1d3b954ba7047b69b7c0aee9fb5ff8eba42b63fa26f6d71ba28-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:44:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:54.996187 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:44:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:54.996517751Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:54.996557559Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.007864925Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4d6039c1-b0ec-4f0b-82a8-25c3bc65aef4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.007891838Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.289891208Z" level=info msg="NetworkStart: stopping network for sandbox 1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.290026415Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/6346b5e3-45c0-473c-b774-9d7ab2250a7f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.290049833Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.290056979Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.290062932Z" level=info msg="Deleting pod 
openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:55.996215 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:44:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:55.996311 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.996567622Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.996600882Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.996700175Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:55.996742964Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:55.997118 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:44:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:44:55.997649 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:44:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:56.010851765Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ecc24693-5c20-48ca-9768-221adea57c01 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:56.010872807Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:56.012818634Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/2a1c3a56-73d7-4fa1-80ab-b639eaf7fb41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:56 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:44:56.012840103Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:44:56.996102 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:44:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:56.996738954Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:44:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:56.996786543Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:44:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:57.008783298Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/417f4aae-78b0-43e0-85ef-0d6716ba5232 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:44:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:57.008806487Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:44:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:44:58.143183223Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.037299285Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.037343741Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2bd36f3f\x2de59e\x2d4742\x2db95d\x2d0f42539b57ed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2bd36f3f\x2de59e\x2d4742\x2db95d\x2d0f42539b57ed.mount has successfully entered the 'dead' state. Jan 23 17:45:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2bd36f3f\x2de59e\x2d4742\x2db95d\x2d0f42539b57ed.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2bd36f3f\x2de59e\x2d4742\x2db95d\x2d0f42539b57ed.mount has successfully entered the 'dead' state. Jan 23 17:45:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2bd36f3f\x2de59e\x2d4742\x2db95d\x2d0f42539b57ed.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2bd36f3f\x2de59e\x2d4742\x2db95d\x2d0f42539b57ed.mount has successfully entered the 'dead' state. Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.082361398Z" level=info msg="runSandbox: deleting pod ID bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a from idIndex" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.082386442Z" level=info msg="runSandbox: removing pod sandbox bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.082405074Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.082417744Z" level=info msg="runSandbox: unmounting shmPath for sandbox bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.098465533Z" level=info msg="runSandbox: removing pod sandbox from storage: bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.101662171Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:00.101681938Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=54d42aad-df5f-4add-a998-e6f1f9607bbf name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:00.102035 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:45:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:00.102185 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:45:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:00.102216 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:45:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:00.102270 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(bd0afe6178847bc911508393201eb7258f2f8b8d1fd67ee1a074d2064d23d11a): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:45:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:03.996051 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:45:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:03.996410891Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:03.996453962Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:45:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:04.008848045Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/168444cf-b411-47fa-96e9-86dcc3bc8215 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:04.008867563Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:06.310191064Z" level=info msg="NetworkStart: stopping network for sandbox 1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:06.310344511Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/70c6a60d-28e8-4293-b70b-9e528e200960 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:06.310368940Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:06.310376450Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:06.310382728Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:10.996038 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:45:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:10.996390239Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:10.996428902Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:45:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:10.996709 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:45:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:10.997210 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:45:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:11.007554694Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/0fb316dc-9434-4672-aa83-5de08949f30a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:11.007753897Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344411770Z" level=info msg="NetworkStart: stopping network for sandbox 8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344605885Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/9f2bf18a-c901-4ec9-a749-547aa95446b5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344626468Z" level=info msg="NetworkStart: stopping network for sandbox 4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344637914Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344703888Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:16 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:45:16.344711802Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344757844Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/ce630d16-04af-490b-bc45-79189c00749e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344781664Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344788081Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.344794514Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.348371473Z" level=info msg="NetworkStart: stopping network for sandbox 4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.348479329Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/b97f78f2-c1b8-466a-b609-df6f74268051 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.348500099Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.348506312Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.348512259Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.349185829Z" level=info msg="NetworkStart: stopping network for sandbox aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.349334190Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/6b2963f9-572b-4ffd-92d8-d2e101755141 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:16 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:45:16.349360048Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.349367225Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.349373510Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.351053540Z" level=info msg="NetworkStart: stopping network for sandbox e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.351182174Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/237c4715-fc7d-47d3-b70b-56db5da06ee4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.351214589Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.351222856Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:16.351230505Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:21.997194 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:45:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:21.997745 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:27.020092160Z" level=info msg="NetworkStart: stopping network for sandbox df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:27.020249420Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/45ac8ba5-6e4f-4086-aea0-83323237afdb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:27 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:45:27.020271898Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:27.020278594Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:27.020286270Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:27.910328 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:27.910347 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:27.910354 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:27.910360 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:27.910367 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:27.910374 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:45:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:27.910380 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:27.912738682Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=3a7c0f23-1d87-4700-b4fb-c43c6c37a485 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:45:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:27.912857970Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3a7c0f23-1d87-4700-b4fb-c43c6c37a485 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.024406901Z" level=info msg="NetworkStart: stopping network for sandbox 49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.024530907Z" level=info msg="Got pod network &{Name:dns-default-srzv5 
Namespace:openshift-dns ID:49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/85838751-231f-4ff1-8f4a-9ba5b90eb4f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.024549836Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.024556594Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.024562770Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.026333551Z" level=info msg="NetworkStart: stopping network for sandbox 06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.026434524Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/e0192e74-0dcf-4494-b0ca-2a5ec665ac69 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.026454559Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.026460978Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.026467054Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:28.142342655Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:45:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:29.020184200Z" level=info msg="NetworkStart: stopping network for sandbox 9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:29.020376960Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/8cd84912-e4fc-4860-ae68-7a96e3c8db9b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:29.020401312Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:29 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:29.020407995Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:29.020414222Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:34.997019 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:45:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:34.997651 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:45:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:36.020020417Z" level=info msg="NetworkStart: stopping network for sandbox c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:36.020163647Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/bee26b02-51c6-4172-842f-2ede09c63101 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:36.020185913Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:36.020193085Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:36.020199587Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.022274742Z" level=info msg="NetworkStart: stopping network for sandbox db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.022444580Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4d6039c1-b0ec-4f0b-82a8-25c3bc65aef4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.022470260Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.022477062Z" 
level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.022486244Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.301333942Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.301373494Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:45:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6346b5e3\x2d45c0\x2d473c\x2db774\x2d9d7ab2250a7f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6346b5e3\x2d45c0\x2d473c\x2db774\x2d9d7ab2250a7f.mount has successfully entered the 'dead' state. Jan 23 17:45:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6346b5e3\x2d45c0\x2d473c\x2db774\x2d9d7ab2250a7f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6346b5e3\x2d45c0\x2d473c\x2db774\x2d9d7ab2250a7f.mount has successfully entered the 'dead' state. Jan 23 17:45:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6346b5e3\x2d45c0\x2d473c\x2db774\x2d9d7ab2250a7f.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6346b5e3\x2d45c0\x2d473c\x2db774\x2d9d7ab2250a7f.mount has successfully entered the 'dead' state. 
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.347307565Z" level=info msg="runSandbox: deleting pod ID 1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0 from idIndex" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.347334750Z" level=info msg="runSandbox: removing pod sandbox 1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.347348800Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.347360835Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.363449007Z" level=info msg="runSandbox: removing pod sandbox from storage: 1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.366815124Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.366834568Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=8a5609d9-740a-4552-9edf-a7e604ea767b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:40.367067    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:45:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:40.367111    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:45:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:40.367135    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:45:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:40.367184    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(1f991c0ab281a0e1d116000be4beeefb88cd22367ca362974b6bfc986fde29a0): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:45:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:40.433378    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.433664847Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.433696311Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.449419472Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/9e287a25-44ff-4fd3-86ee-d0e29c11efda Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jan 23 17:45:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:40.449444762Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.024824571Z" level=info msg="NetworkStart: stopping network for sandbox 518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.024965856Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ecc24693-5c20-48ca-9768-221adea57c01 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.024989127Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.024995445Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.025001044Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.025020271Z" level=info msg="NetworkStart: stopping network for sandbox c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.025148646Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/2a1c3a56-73d7-4fa1-80ab-b639eaf7fb41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.025170436Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.025177508Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:45:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:41.025184243Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:45:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:42.021987672Z" level=info msg="NetworkStart: stopping network for sandbox 132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:42.022125582Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/417f4aae-78b0-43e0-85ef-0d6716ba5232 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jan 23 17:45:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:42.022148396Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:45:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:42.022154930Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:45:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:42.022160702Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:45:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:49.022041736Z" level=info msg="NetworkStart: stopping network for sandbox a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:49.022191716Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/168444cf-b411-47fa-96e9-86dcc3bc8215 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jan 23 17:45:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:49.022224580Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:45:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:49.022230981Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:45:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:49.022237512Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:45:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:49.997018    8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:45:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:49.997507    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.323197884Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.323251090Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-70c6a60d\x2d28e8\x2d4293\x2db70b\x2d9e528e200960.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-70c6a60d\x2d28e8\x2d4293\x2db70b\x2d9e528e200960.mount has successfully entered the 'dead' state.
Jan 23 17:45:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-70c6a60d\x2d28e8\x2d4293\x2db70b\x2d9e528e200960.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-70c6a60d\x2d28e8\x2d4293\x2db70b\x2d9e528e200960.mount has successfully entered the 'dead' state.
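The kubelet lines repeating "back-off 5m0s restarting failed container=ovnkube-node" reflect an exponential restart back-off that has hit its cap, so the same message recurs every sync period. A sketch of that schedule, assuming kubelet's default 10s base and doubling (only the 5m0s cap is confirmed by this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second // assumed kubelet backOffPeriod, not read from this node
		maxDelay = 5 * time.Minute  // the "back-off 5m0s" quoted in the log is this cap
	)
	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart attempt %d: wait %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // once capped, every retry reports "back-off 5m0s"
		}
	}
}
```

While ovnkube-node sits in this back-off, OVN-Kubernetes never writes its CNI config, which keeps every sandbox creation above stuck on the readiness-indicator wait.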
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.366319640Z" level=info msg="runSandbox: deleting pod ID 1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e from idIndex" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.366356026Z" level=info msg="runSandbox: removing pod sandbox 1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.366373193Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.366386402Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.383475880Z" level=info msg="runSandbox: removing pod sandbox from storage: 1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.386663104Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.386684042Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=dfdf1ad9-0366-4683-92f5-8b120cb7a6b6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:51.386893    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:45:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:51.386941    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:45:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:51.386964    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:45:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:45:51.387014    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30
Jan 23 17:45:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:45:51.453663    8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.453967586Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.454005978Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:45:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-70c6a60d\x2d28e8\x2d4293\x2db70b\x2d9e528e200960.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-70c6a60d\x2d28e8\x2d4293\x2db70b\x2d9e528e200960.mount has successfully entered the 'dead' state.
Jan 23 17:45:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1e18c57cf7b6aa569fd56c0ae0816785c9d6eea5588c343283518c1355fed11e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.465042632Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/b2881ddd-427f-43e6-8d1c-e9d63b419f75 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jan 23 17:45:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:51.465067858Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:45:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:56.021407356Z" level=info msg="NetworkStart: stopping network for sandbox a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:45:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:56.021551166Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/0fb316dc-9434-4672-aa83-5de08949f30a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Jan 23 17:45:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:56.021574892Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:45:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:56.021583557Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:45:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:56.021589555Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:45:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:45:58.142314080Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.355622413Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.355662425Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.356607706Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.356635472Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ce630d16\x2d04af\x2d490b\x2dbc45\x2d79189c00749e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ce630d16\x2d04af\x2d490b\x2dbc45\x2d79189c00749e.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.359801392Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.359836676Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.360156303Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.360192811Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.363275665Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.363310888Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6b2963f9\x2d572b\x2d4ffd\x2d92d8\x2dd2e101755141.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-6b2963f9\x2d572b\x2d4ffd\x2d92d8\x2dd2e101755141.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b97f78f2\x2dc1b8\x2d466a\x2db609\x2ddf6f74268051.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b97f78f2\x2dc1b8\x2d466a\x2db609\x2ddf6f74268051.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9f2bf18a\x2dc901\x2d4ec9\x2da749\x2d547aa95446b5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9f2bf18a\x2dc901\x2d4ec9\x2da749\x2d547aa95446b5.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.405314823Z" level=info msg="runSandbox: deleting pod ID 4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6 from idIndex" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.405344935Z" level=info msg="runSandbox: removing pod sandbox 4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.405358860Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.405373707Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.405315586Z" level=info msg="runSandbox: deleting pod ID 8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1 from idIndex" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.405425467Z" level=info msg="runSandbox: removing pod sandbox 8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.405440502Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.405453195Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409287402Z" level=info msg="runSandbox: deleting pod ID 4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb from idIndex" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409314570Z" level=info msg="runSandbox: removing pod sandbox 4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409318081Z" level=info msg="runSandbox: deleting pod ID aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1 from idIndex" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409352525Z" level=info msg="runSandbox: removing pod sandbox aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409367621Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409382239Z" level=info msg="runSandbox: unmounting shmPath for sandbox aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409328003Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409454986Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409291179Z" level=info msg="runSandbox: deleting pod ID e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e from idIndex" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409558337Z" level=info msg="runSandbox: removing pod sandbox e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409572222Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.409582823Z" level=info msg="runSandbox: unmounting shmPath for sandbox e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.421461569Z" level=info msg="runSandbox: removing pod sandbox from storage: 4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.421476016Z" level=info msg="runSandbox: removing pod sandbox from storage: 8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.424464204Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.424483013Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=fa1ed745-a483-4b4e-82b0-7febb0b5a72d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.424702    8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.424748    8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.424771    8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.424816    8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.425464913Z" level=info msg="runSandbox: removing pod sandbox from storage: 4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.425468678Z" level=info msg="runSandbox: removing pod sandbox from storage: e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.425487838Z" level=info msg="runSandbox: removing pod sandbox from storage: aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.427732222Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.427753899Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=d4fc12fa-ba9b-4a5d-a564-bbb23c860201 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.427989 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.428030 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.428054 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.428099 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.430781141Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.430798004Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=4d11f93e-e4d7-4d66-9f95-d9bc97eeecd8 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.431003 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.431037 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.431056 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.431093 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.433688543Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.433705995Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=64e61636-9ccb-4e67-b3f4-1cc6f8eda182 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.433898 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.433932 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.433956 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.433999 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.439114146Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.439586572Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=1c179e78-8f52-482a-913e-3ba7df38229c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.439805 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.439837 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.439857 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.439892 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-237c4715\x2dfc7d\x2d47d3\x2db70b\x2d56db5da06ee4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-237c4715\x2dfc7d\x2d47d3\x2db70b\x2d56db5da06ee4.mount has successfully entered the 'dead' state. Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-237c4715\x2dfc7d\x2d47d3\x2db70b\x2d56db5da06ee4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-237c4715\x2dfc7d\x2d47d3\x2db70b\x2d56db5da06ee4.mount has successfully entered the 'dead' state. Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-237c4715\x2dfc7d\x2d47d3\x2db70b\x2d56db5da06ee4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-237c4715\x2dfc7d\x2d47d3\x2db70b\x2d56db5da06ee4.mount has successfully entered the 'dead' state. Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6b2963f9\x2d572b\x2d4ffd\x2d92d8\x2dd2e101755141.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6b2963f9\x2d572b\x2d4ffd\x2d92d8\x2dd2e101755141.mount has successfully entered the 'dead' state. Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6b2963f9\x2d572b\x2d4ffd\x2d92d8\x2dd2e101755141.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6b2963f9\x2d572b\x2d4ffd\x2d92d8\x2dd2e101755141.mount has successfully entered the 'dead' state. Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b97f78f2\x2dc1b8\x2d466a\x2db609\x2ddf6f74268051.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b97f78f2\x2dc1b8\x2d466a\x2db609\x2ddf6f74268051.mount has successfully entered the 'dead' state. 
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b97f78f2\x2dc1b8\x2d466a\x2db609\x2ddf6f74268051.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b97f78f2\x2dc1b8\x2d466a\x2db609\x2ddf6f74268051.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9f2bf18a\x2dc901\x2d4ec9\x2da749\x2d547aa95446b5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-9f2bf18a\x2dc901\x2d4ec9\x2da749\x2d547aa95446b5.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9f2bf18a\x2dc901\x2d4ec9\x2da749\x2d547aa95446b5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-9f2bf18a\x2dc901\x2d4ec9\x2da749\x2d547aa95446b5.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ce630d16\x2d04af\x2d490b\x2dbc45\x2d79189c00749e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-ce630d16\x2d04af\x2d490b\x2dbc45\x2d79189c00749e.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ce630d16\x2d04af\x2d490b\x2dbc45\x2d79189c00749e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-ce630d16\x2d04af\x2d490b\x2dbc45\x2d79189c00749e.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-e1b95ab2a3a23f42192b0cdab01660b62ce5af4d4848f3da4fec5e48718d4e5e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-aaa62be61b6406c1fbd7d17d2f0c89ad649ac39615ea842385daa0db6b1a3bf1-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-4f24918a5c387047955df5d1ebda937eb63d410734a44577744c7ba2e1619ac6-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-4dda58903a4a44e21eae2d0ffb27f71c75902cd1118dd62c61deb7a7a4499bbb-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-8271d95c11ad4f9af498cae0a6c5a47014e8e36f2929d8144afbdb6cf92106f1-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:01.472766 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:01.472922 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:01.473022 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:01.473133 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473155052Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473197776Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:01.473213 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473308548Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473343839Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473416682Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473444981Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473520947Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473551132Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473577011Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.473595932Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.500502342Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a5dd2dd8-b51b-4fba-80bc-7858bbcc1b13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.500527224Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.501106756Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/82cfa1f8-2137-4e78-9411-5969bac13519 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.501128219Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.502383111Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/cb03805d-13e8-4b9b-ac65-cb93a277aa04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.502401213Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.503190441Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/178419fa-aa99-442d-b551-d34002a45d99 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.503217315Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.505845065Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/5dc73034-15dc-4dd8-af9c-d8b49d0941c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:01.505866948Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:01.996830 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:46:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:01.997529 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495968.1179] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 23 17:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495968.1184] device (eno12409): Activation: failed for connection 'Wired Connection'
Jan 23 17:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495968.1185] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 23 17:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495968.1392] dhcp4 (eno12409): canceled DHCP transaction
Jan 23 17:46:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674495968.1394] dhcp6 (eno12409): canceled DHCP transaction
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.031406413Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.031648823Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-45ac8ba5\x2d6e4f\x2d4086\x2daea0\x2d83323237afdb.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-45ac8ba5\x2d6e4f\x2d4086\x2daea0\x2d83323237afdb.mount has successfully entered the 'dead' state.
Jan 23 17:46:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-45ac8ba5\x2d6e4f\x2d4086\x2daea0\x2d83323237afdb.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-45ac8ba5\x2d6e4f\x2d4086\x2daea0\x2d83323237afdb.mount has successfully entered the 'dead' state.
Jan 23 17:46:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-45ac8ba5\x2d6e4f\x2d4086\x2daea0\x2d83323237afdb.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-45ac8ba5\x2d6e4f\x2d4086\x2daea0\x2d83323237afdb.mount has successfully entered the 'dead' state.
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.075283499Z" level=info msg="runSandbox: deleting pod ID df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53 from idIndex" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.075306183Z" level=info msg="runSandbox: removing pod sandbox df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.075322894Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.075335003Z" level=info msg="runSandbox: unmounting shmPath for sandbox df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.095426902Z" level=info msg="runSandbox: removing pod sandbox from storage: df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.098506103Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:12.098524490Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ea8db71f-7084-410f-a25d-7dd23c951759 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:12.098743 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:12.098795 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:12.098820 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:12.098873 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(df342654042b465b727a6170005f7c1abafceb6eebd854791ddd1084a5348c53): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.035622238Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.035652293Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.036950373Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.036979063Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-85838751\x2d231f\x2d4ff1\x2d8f4a\x2d9ba5b90eb4f6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-85838751\x2d231f\x2d4ff1\x2d8f4a\x2d9ba5b90eb4f6.mount has successfully entered the 'dead' state.
Jan 23 17:46:13 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e0192e74\x2d0dcf\x2d4494\x2db0ca\x2d2a5ec665ac69.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e0192e74\x2d0dcf\x2d4494\x2db0ca\x2d2a5ec665ac69.mount has successfully entered the 'dead' state.
Jan 23 17:46:13 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e0192e74\x2d0dcf\x2d4494\x2db0ca\x2d2a5ec665ac69.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e0192e74\x2d0dcf\x2d4494\x2db0ca\x2d2a5ec665ac69.mount has successfully entered the 'dead' state.
Jan 23 17:46:13 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-85838751\x2d231f\x2d4ff1\x2d8f4a\x2d9ba5b90eb4f6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-85838751\x2d231f\x2d4ff1\x2d8f4a\x2d9ba5b90eb4f6.mount has successfully entered the 'dead' state.
Jan 23 17:46:13 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e0192e74\x2d0dcf\x2d4494\x2db0ca\x2d2a5ec665ac69.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-e0192e74\x2d0dcf\x2d4494\x2db0ca\x2d2a5ec665ac69.mount has successfully entered the 'dead' state.
Jan 23 17:46:13 hub-master-0.workload.bos2.lab systemd[1]: run-netns-85838751\x2d231f\x2d4ff1\x2d8f4a\x2d9ba5b90eb4f6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-85838751\x2d231f\x2d4ff1\x2d8f4a\x2d9ba5b90eb4f6.mount has successfully entered the 'dead' state.
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.072335830Z" level=info msg="runSandbox: deleting pod ID 06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807 from idIndex" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.072362195Z" level=info msg="runSandbox: removing pod sandbox 06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.072374580Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.072387476Z" level=info msg="runSandbox: unmounting shmPath for sandbox 06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.072341760Z" level=info msg="runSandbox: deleting pod ID 49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec from idIndex" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.072456257Z" level=info msg="runSandbox: removing pod sandbox 49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.072470450Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.072482149Z" level=info msg="runSandbox: unmounting shmPath for sandbox 49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.084458632Z" level=info msg="runSandbox: removing pod sandbox from storage: 49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.084489608Z" level=info msg="runSandbox: removing pod sandbox from storage: 06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.087967046Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.087984127Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=2838554a-c24c-447a-8d2f-a14ec31c7a73 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:13.088190 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:13.088243 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:13.088266 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:13.088315 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.091240621Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:13.091265229Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=bfa939d5-dd34-47b0-a44f-e071e5f66ae6 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:13.091453 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:13.091495 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:13.091519 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:13.091565 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.031117861Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.031153924Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8cd84912\x2de4fc\x2d4860\x2dae68\x2d7a96e3c8db9b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-8cd84912\x2de4fc\x2d4860\x2dae68\x2d7a96e3c8db9b.mount has successfully entered the 'dead' state.
Jan 23 17:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-06ef2dfe799af5df649f49c57b13cf9f71ba6b77f74279f201f6ed53286da807-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-49cc453e552428212054dd9d9cc90750ac03a0677c4d6241e2527b0ae83080ec-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8cd84912\x2de4fc\x2d4860\x2dae68\x2d7a96e3c8db9b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-8cd84912\x2de4fc\x2d4860\x2dae68\x2d7a96e3c8db9b.mount has successfully entered the 'dead' state.
Jan 23 17:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8cd84912\x2de4fc\x2d4860\x2dae68\x2d7a96e3c8db9b.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-8cd84912\x2de4fc\x2d4860\x2dae68\x2d7a96e3c8db9b.mount has successfully entered the 'dead' state.
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.077312904Z" level=info msg="runSandbox: deleting pod ID 9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43 from idIndex" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.077337845Z" level=info msg="runSandbox: removing pod sandbox 9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.077352144Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.077364246Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.097421861Z" level=info msg="runSandbox: removing pod sandbox from storage: 9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.100891144Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:14.100910926Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=119dda76-5a95-4031-81f4-d4aebb5211d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:14.101115 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:14.101153 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:46:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:14.101174 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:46:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:14.101222 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(9c66db93114dc70747411dc26740c1a8e09863e40607044bcf40bcdcc7abee43): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:46:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:15.997227 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:46:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:15.997726 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:46:18 hub-master-0.workload.bos2.lab conmon[160204]: conmon 9e98c79ccd8ab7067bcf : container 160216 exited with status 1
Jan 23 17:46:18 hub-master-0.workload.bos2.lab systemd[1]: crio-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope has successfully entered the 'dead' state.
Jan 23 17:46:18 hub-master-0.workload.bos2.lab systemd[1]: crio-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope: Consumed 3.691s CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope completed and consumed the indicated resources.
Jan 23 17:46:18 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope has successfully entered the 'dead' state.
Jan 23 17:46:18 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope: Consumed 53ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f.scope completed and consumed the indicated resources.
Jan 23 17:46:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:19.508374 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f" exitCode=1
Jan 23 17:46:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:19.508452 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f}
Jan 23 17:46:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:19.508555 8631 scope.go:115] "RemoveContainer" containerID="3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4"
Jan 23 17:46:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:19.508839 8631 scope.go:115] "RemoveContainer" containerID="9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f"
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.509426678Z" level=info msg="Removing container: 3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4" id=1a3b1f16-b2e4-4721-a23a-16ae55386fa0 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.509448383Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=682a896c-b7b3-4eeb-a6cb-0ada03263ffb name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.509666321Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=682a896c-b7b3-4eeb-a6cb-0ada03263ffb name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.510132989Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=3a87dd84-f968-4c45-9225-989a81a7e1e8 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.510239549Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3a87dd84-f968-4c45-9225-989a81a7e1e8 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.510728020Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=75c32e8b-f220-436c-b706-13a8316c620e name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.510795289Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:19 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-a468c6f57864a1089001915bd3cdf215afe50801c7b5e730eb5df6c0295a4d36-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-a468c6f57864a1089001915bd3cdf215afe50801c7b5e730eb5df6c0295a4d36-merged.mount has successfully entered the 'dead' state.
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.552954417Z" level=info msg="Removed container 3ef12765fbf1322338839df28a28171a94969abee1d385ecd7c45b5eea6602c4: openshift-multus/multus-cdt6c/kube-multus" id=1a3b1f16-b2e4-4721-a23a-16ae55386fa0 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:46:19 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope.
-- Subject: Unit crio-conmon-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:46:19 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.
-- Subject: Unit crio-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.668798566Z" level=info msg="Created container 77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f: openshift-multus/multus-cdt6c/kube-multus" id=75c32e8b-f220-436c-b706-13a8316c620e name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.669220604Z" level=info msg="Starting container: 77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f" id=943db53d-c50c-447c-94aa-0c0c5f898e09 name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.675909842Z" level=info msg="Started container" PID=178382 containerID=77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f description=openshift-multus/multus-cdt6c/kube-multus id=943db53d-c50c-447c-94aa-0c0c5f898e09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.680647146Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_bf069688-1ac5-4876-ad5c-15f8562471c0\""
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.690427306Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.690443946Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.703036487Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\""
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.712849366Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.712864954Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:46:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:19.712875589Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_bf069688-1ac5-4876-ad5c-15f8562471c0\""
Jan 23 17:46:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:20.510929 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f}
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.030506641Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.030547603Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bee26b02\x2d51c6\x2d4172\x2d842f\x2d2ede09c63101.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-bee26b02\x2d51c6\x2d4172\x2d842f\x2d2ede09c63101.mount has successfully entered the 'dead' state.
Jan 23 17:46:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bee26b02\x2d51c6\x2d4172\x2d842f\x2d2ede09c63101.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-bee26b02\x2d51c6\x2d4172\x2d842f\x2d2ede09c63101.mount has successfully entered the 'dead' state.
Jan 23 17:46:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bee26b02\x2d51c6\x2d4172\x2d842f\x2d2ede09c63101.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-bee26b02\x2d51c6\x2d4172\x2d842f\x2d2ede09c63101.mount has successfully entered the 'dead' state.
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.083315223Z" level=info msg="runSandbox: deleting pod ID c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f from idIndex" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.083341162Z" level=info msg="runSandbox: removing pod sandbox c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.083360019Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.083371444Z" level=info msg="runSandbox: unmounting shmPath for sandbox c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.095464204Z" level=info msg="runSandbox: removing pod sandbox from storage: c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.099026323Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:21.099043766Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=fb315b49-144f-47b9-a2bc-c9ddf68e5348 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:21.099214 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:21.099272 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:46:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:21.099312 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:46:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:21.099373 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c18a1cdf97e4ead0f076d96c878086ad7373f4758befc5c8ef742a007e05e18f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:46:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:24.995810 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:46:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:24.996120700Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:24.996160332Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.007979702Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/9d1b8d6d-3232-44cd-ac88-b5cd46a067ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.007999559Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.033736803Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.033770873Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4d6039c1\x2db0ec\x2d4f0b\x2d82a8\x2d25c3bc65aef4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4d6039c1\x2db0ec\x2d4f0b\x2d82a8\x2d25c3bc65aef4.mount has successfully entered the 'dead' state.
Jan 23 17:46:25 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4d6039c1\x2db0ec\x2d4f0b\x2d82a8\x2d25c3bc65aef4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4d6039c1\x2db0ec\x2d4f0b\x2d82a8\x2d25c3bc65aef4.mount has successfully entered the 'dead' state.
Jan 23 17:46:25 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4d6039c1\x2db0ec\x2d4f0b\x2d82a8\x2d25c3bc65aef4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4d6039c1\x2db0ec\x2d4f0b\x2d82a8\x2d25c3bc65aef4.mount has successfully entered the 'dead' state.
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.091304625Z" level=info msg="runSandbox: deleting pod ID db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581 from idIndex" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.091331172Z" level=info msg="runSandbox: removing pod sandbox db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.091345469Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.091358623Z" level=info msg="runSandbox: unmounting shmPath for sandbox db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.107433057Z" level=info msg="runSandbox: removing pod sandbox from storage: db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.110182395Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.110201885Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=5bba1874-c98a-4f5b-8dff-5145b98b2b9d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:25.110421 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:25.110463 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:25.110485 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:25.110534 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.464026091Z" level=info msg="NetworkStart: stopping network for sandbox afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.464172751Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/9e287a25-44ff-4fd3-86ee-d0e29c11efda Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.464201091Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.464215418Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:46:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:25.464222656Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-db2885a43e4cca5581559d106514878a72315ce7f34eab9035640b832ff57581-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.036013885Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.036042817Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.036061936Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.036097462Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2a1c3a56\x2d73d7\x2d4fa1\x2d80ab\x2db639eaf7fb41.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-2a1c3a56\x2d73d7\x2d4fa1\x2d80ab\x2db639eaf7fb41.mount has successfully entered the 'dead' state.
Jan 23 17:46:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ecc24693\x2d5c20\x2d48ca\x2d9768\x2d221adea57c01.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-ecc24693\x2d5c20\x2d48ca\x2d9768\x2d221adea57c01.mount has successfully entered the 'dead' state.
Jan 23 17:46:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2a1c3a56\x2d73d7\x2d4fa1\x2d80ab\x2db639eaf7fb41.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-2a1c3a56\x2d73d7\x2d4fa1\x2d80ab\x2db639eaf7fb41.mount has successfully entered the 'dead' state.
Jan 23 17:46:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ecc24693\x2d5c20\x2d48ca\x2d9768\x2d221adea57c01.mount: Succeeded.
Jan 23 17:46:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2a1c3a56\x2d73d7\x2d4fa1\x2d80ab\x2db639eaf7fb41.mount: Succeeded.
Jan 23 17:46:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ecc24693\x2d5c20\x2d48ca\x2d9768\x2d221adea57c01.mount: Succeeded.
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.073315845Z" level=info msg="runSandbox: deleting pod ID c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e from idIndex" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.073343320Z" level=info msg="runSandbox: removing pod sandbox c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.073356857Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.073368551Z" level=info msg="runSandbox: unmounting shmPath for sandbox c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.073319532Z" level=info msg="runSandbox: deleting pod ID 518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a from idIndex" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.073429157Z" level=info msg="runSandbox: removing pod sandbox 518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.073442707Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.073454111Z" level=info msg="runSandbox: unmounting shmPath for sandbox 518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.093444612Z" level=info msg="runSandbox: removing pod sandbox from storage: c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.093455891Z" level=info msg="runSandbox: removing pod sandbox from storage: 518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.096988063Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.097007952Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=534f51f3-9718-4218-9ce2-e9d4b6d079e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.097235 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.097279 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.097304 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.097352 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.100024734Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.100043355Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=a9bbc278-2996-4259-ab3c-e4de955ad0a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.100261 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.100299 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.100324 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.100368 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:26.996227 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.996565741Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:26.996598510Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:26.996643 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:46:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:26.997146 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:46:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c869e7a46ac2ae0d5719deb1944d4c1a1d2b91deb27a90e592c57796a602f90e-userdata-shm.mount: Succeeded.
Jan 23 17:46:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-518ef19a77d516399aad1532481e119f52876a82743922d707c863863faa3e5a-userdata-shm.mount: Succeeded.
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.008374046Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/d7c35065-b086-419d-8205-14106cdcb8d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.008399536Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.034293991Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.034327533Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-417f4aae\x2d78b0\x2d43e0\x2d85ef\x2d0d6716ba5232.mount: Succeeded.
Jan 23 17:46:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-417f4aae\x2d78b0\x2d43e0\x2d85ef\x2d0d6716ba5232.mount: Succeeded.
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.086306914Z" level=info msg="runSandbox: deleting pod ID 132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3 from idIndex" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.086332640Z" level=info msg="runSandbox: removing pod sandbox 132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.086345929Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.086358835Z" level=info msg="runSandbox: unmounting shmPath for sandbox 132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.111428520Z" level=info msg="runSandbox: removing pod sandbox from storage: 132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.114143067Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.114161829Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=2ef31ccb-accf-468a-8101-90b4dc9c5597 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:27.114454 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:27.114497 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:27.114521 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:27.114567 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.910876 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.910895 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.910902 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.910914 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.910921 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.910934 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.910942 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.996330 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:46:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:27.996498 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.997109738Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.997143643Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.997142667Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:27.997262870Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-417f4aae\x2d78b0\x2d43e0\x2d85ef\x2d0d6716ba5232.mount: Succeeded.
Jan 23 17:46:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-132a44f07bb42ea592e44f2132ebb98370c90481b61cf38609a3db890e6a4aa3-userdata-shm.mount: Succeeded.
Jan 23 17:46:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:28.016965681Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/48c63a36-3a88-4237-9748-1382657fa5da Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:28.016996112Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:28.016981519Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/54b9cfbc-8a60-43ec-bca6-775c3deecc0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:28.017080522Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:28.143402770Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:46:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:32.995878 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:46:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:32.996407872Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:32.996450696Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:33.007467990Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/b41234bc-63a9-4461-927e-8b09a071d53f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:33.007488607Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.032958530Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.032993459Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-168444cf\x2db411\x2d47fa\x2d96e9\x2d86dcc3bc8215.mount: Succeeded.
Jan 23 17:46:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-168444cf\x2db411\x2d47fa\x2d96e9\x2d86dcc3bc8215.mount: Succeeded.
Jan 23 17:46:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-168444cf\x2db411\x2d47fa\x2d96e9\x2d86dcc3bc8215.mount: Succeeded.
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.077284984Z" level=info msg="runSandbox: deleting pod ID a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c from idIndex" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.077314264Z" level=info msg="runSandbox: removing pod sandbox a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.077327497Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.077341600Z" level=info msg="runSandbox: unmounting shmPath for sandbox a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c-userdata-shm.mount: Succeeded.
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.093406213Z" level=info msg="runSandbox: removing pod sandbox from storage: a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.096666190Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:34.096686827Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=137522d1-f3bf-43be-9046-a5022d7ce22a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:34.096900 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:34.096943 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:34.096966 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:34.097009 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(a767d262733d7be3a58b7581a1068aef1a4b6ce47de06e9869822e07433da83c): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:46:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:36.478689980Z" level=info msg="NetworkStart: stopping network for sandbox e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:36.478843568Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/b2881ddd-427f-43e6-8d1c-e9d63b419f75 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:36.478866059Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:46:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:36.478872629Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:46:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:36.478878770Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:36.996458 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:46:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:36.996796454Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:36.997032074Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:37.008228655Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/d1fc2ff2-19bd-4211-9718-6eb1bf2886f3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:37.008248151Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:37.996193 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:46:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:37.996592708Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:37.996641373Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:38.007380281Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/e6dc52b2-c38b-4ddd-8fb9-a16057ce21c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:38.007402496Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:38.996467 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:46:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:38.996969 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:46:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:39.995856 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:39.998908599Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:39.998966048Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:40.013726795Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/53c9c642-2260-4632-aedf-b13a6fbe5511 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:40.013756939Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.032803871Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.032842548Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0fb316dc\x2d9434\x2d4672\x2daa83\x2d5de08949f30a.mount: Succeeded.
Jan 23 17:46:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0fb316dc\x2d9434\x2d4672\x2daa83\x2d5de08949f30a.mount: Succeeded.
Jan 23 17:46:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0fb316dc\x2d9434\x2d4672\x2daa83\x2d5de08949f30a.mount: Succeeded.
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.082306558Z" level=info msg="runSandbox: deleting pod ID a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0 from idIndex" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.082331496Z" level=info msg="runSandbox: removing pod sandbox a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.082347373Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.082360171Z" level=info msg="runSandbox: unmounting shmPath for sandbox a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0-userdata-shm.mount: Succeeded.
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.097437308Z" level=info msg="runSandbox: removing pod sandbox from storage: a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.100441267Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.100459559Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=f28e2b12-0f6b-45fd-aef3-c88356985435 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:41.100683 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:46:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:41.100731 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:46:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:41.100755 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:46:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:41.100804 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a1ee04a0e584d51fbf9328c61698912dab4f70b0cc0abd2d6e8ecc9f431bc0e0): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:46:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:41.483198 8631 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 17:46:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:41.499981 8631 csr.go:261] certificate signing request csr-h4wjc is approved, waiting to be issued
Jan 23 17:46:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:41.502906 8631 csr.go:257] certificate signing request csr-h4wjc is issued
Jan 23 17:46:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:41.996347 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.996738005Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:41.996796292Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:46:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:42.008220138Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/189bd401-8a89-499b-917a-527a83510af1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:42.008243906Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:42.503792 8631 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2023-02-22 14:05:15 +0000 UTC, rotation deadline is 2023-02-14 12:07:15.128493921 +0000 UTC
Jan 23 17:46:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:42.503816 8631 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Waiting 522h20m32.624679341s for next certificate rotation
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.513966849Z" level=info msg="NetworkStart: stopping network for sandbox f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514132212Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a5dd2dd8-b51b-4fba-80bc-7858bbcc1b13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514157362Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514165240Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514172539Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514638041Z" level=info msg="NetworkStart: stopping network for sandbox f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514756564Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/178419fa-aa99-442d-b551-d34002a45d99 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514779779Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514786881Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514793705Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.514754618Z" level=info msg="NetworkStart: stopping network for sandbox 49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515060463Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/82cfa1f8-2137-4e78-9411-5969bac13519 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515091622Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515102208Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515110290Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515233869Z" level=info msg="NetworkStart: stopping network for sandbox b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515370921Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/cb03805d-13e8-4b9b-ac65-cb93a277aa04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515394202Z" level=error msg="error loading cached network config: network \"multus-cni-network\"
not found in CNI cache" Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515400568Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.515406438Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.521350153Z" level=info msg="NetworkStart: stopping network for sandbox b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.521466718Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/5dc73034-15dc-4dd8-af9c-d8b49d0941c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.521488894Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.521498222Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:46:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:46.521506002Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:46:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:47.996477 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:46:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:47.996987842Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:47.997029585Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:46:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:48.008409858Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/fd930797-666c-4322-ab47-6ee5c46d66da Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:46:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:48.008446763Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:46:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:48.087883 8631 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 17:46:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:52.996166 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:46:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:52.996618278Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:46:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:52.996658424Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:46:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:53.007777767Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d696fbaf-6315-4c0e-b843-ab63788b8c6c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:46:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:53.007801580Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:46:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:46:53.997162 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:46:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:46:53.997679 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node 
pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:46:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:46:58.142382533Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:47:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:06.997075 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:47:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:06.997594 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.022639521Z" level=info msg="NetworkStart: stopping network for sandbox f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.023003055Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/9d1b8d6d-3232-44cd-ac88-b5cd46a067ba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.023027454Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.023033371Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.023040326Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.476507438Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.476545265Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a" id=5354f32e-78d8-4456-b32c-f497f4f994bd 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9e287a25\x2d44ff\x2d4fd3\x2d86ee\x2dd0e29c11efda.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9e287a25\x2d44ff\x2d4fd3\x2d86ee\x2dd0e29c11efda.mount has successfully entered the 'dead' state. Jan 23 17:47:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9e287a25\x2d44ff\x2d4fd3\x2d86ee\x2dd0e29c11efda.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9e287a25\x2d44ff\x2d4fd3\x2d86ee\x2dd0e29c11efda.mount has successfully entered the 'dead' state. Jan 23 17:47:10 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9e287a25\x2d44ff\x2d4fd3\x2d86ee\x2dd0e29c11efda.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9e287a25\x2d44ff\x2d4fd3\x2d86ee\x2dd0e29c11efda.mount has successfully entered the 'dead' state. Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.512317755Z" level=info msg="runSandbox: deleting pod ID afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a from idIndex" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.512345095Z" level=info msg="runSandbox: removing pod sandbox afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.512359736Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.512372851Z" level=info msg="runSandbox: unmounting shmPath for sandbox afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a-userdata-shm.mount has successfully entered the 'dead' state. 
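Context for the recurring sandbox failures above: Multus is configured with a readiness indicator file (here /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, written once the default OVN-Kubernetes network is up) and polls for that file on every CNI ADD or DEL before doing any work; "pollimmediate error: timed out waiting for the condition" is that poll expiring. Because ovnkube-node on this host is crash-looping (see the CrashLoopBackOff entries), the file never appears, so every pod sandbox create and teardown on the node times out the same way. A minimal Python sketch of the gate follows; it is illustrative only (Multus itself is Go, using wait.PollImmediate), and the interval/timeout values are assumptions, not Multus's configured ones:

    # Illustrative sketch (Python, not Multus's actual Go code): check the
    # condition immediately, then at a fixed interval, until it holds or the
    # timeout expires -- the semantics behind "pollimmediate error" above.
    import os
    import time

    READINESS_FILE = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"

    def poll_immediate(interval_s: float, timeout_s: float, condition) -> None:
        deadline = time.monotonic() + timeout_s
        while True:  # first check happens before any sleep
            if condition():
                return
            if time.monotonic() >= deadline:
                raise TimeoutError("timed out waiting for the condition")
            time.sleep(interval_s)

    # Gate a CNI ADD/DEL on default-network readiness (values illustrative);
    # with ovnkube-node down, this raises TimeoutError, as in the log.
    poll_immediate(1.0, 600.0, lambda: os.path.isfile(READINESS_FILE))
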
Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.524444449Z" level=info msg="runSandbox: removing pod sandbox from storage: afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.527754461Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.527774679Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=5354f32e-78d8-4456-b32c-f497f4f994bd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:10.528007 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:47:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:10.528053 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:47:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:10.528076 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:47:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:10.528125 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(afc75046c16679715978423fd6905292b7f9446f3c4953a66dea50a361028f2a): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298 Jan 23 17:47:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:10.620443 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.620648219Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.620678623Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.637147915Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/d3d4d9b1-7f21-438c-9506-0e074b855b26 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:10.637171642Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:12.022254488Z" level=info msg="NetworkStart: stopping network for sandbox 10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:12.022402192Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/d7c35065-b086-419d-8205-14106cdcb8d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:12.022425215Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:12.022431767Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:12.022438130Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.030416596Z" level=info msg="NetworkStart: stopping network for sandbox e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.030550211Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/48c63a36-3a88-4237-9748-1382657fa5da 
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.030576671Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.030583275Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.030589231Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.031609545Z" level=info msg="NetworkStart: stopping network for sandbox 03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.031757914Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/54b9cfbc-8a60-43ec-bca6-775c3deecc0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.031784750Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.031794603Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:13.031801811Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:18.021809789Z" level=info msg="NetworkStart: stopping network for sandbox 9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:18.021984336Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/b41234bc-63a9-4461-927e-8b09a071d53f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:18.022005963Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:18.022013264Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:18.022019136Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:19 
hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:19.996838 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:47:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:19.997348 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.489822653Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.489862362Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b2881ddd\x2d427f\x2d43e6\x2d8d1c\x2de9d63b419f75.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b2881ddd\x2d427f\x2d43e6\x2d8d1c\x2de9d63b419f75.mount has successfully entered the 'dead' state. Jan 23 17:47:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b2881ddd\x2d427f\x2d43e6\x2d8d1c\x2de9d63b419f75.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-b2881ddd\x2d427f\x2d43e6\x2d8d1c\x2de9d63b419f75.mount has successfully entered the 'dead' state. Jan 23 17:47:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b2881ddd\x2d427f\x2d43e6\x2d8d1c\x2de9d63b419f75.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b2881ddd\x2d427f\x2d43e6\x2d8d1c\x2de9d63b419f75.mount has successfully entered the 'dead' state. 
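The ovnkube-node restart attempts above follow the kubelet's standard crash back-off: the delay starts at 10s and doubles on each failed restart up to a 5-minute cap (and is reset only after the container runs cleanly for a sustained period), which is why every sync by now reports the same "back-off 5m0s restarting failed container" error. A short sketch of that schedule, assuming the usual 10s/2x/5m parameters:

    # Kubelet-style restart back-off: 10s initial delay, doubling per failed
    # restart, capped at 5 minutes -- hence the steady "back-off 5m0s" above.
    def restart_backoffs(initial_s: int = 10, factor: int = 2, cap_s: int = 300):
        delay = initial_s
        while True:
            yield min(delay, cap_s)
            delay *= factor

    waits = restart_backoffs()
    print([next(waits) for _ in range(7)])  # [10, 20, 40, 80, 160, 300, 300]
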
Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.523283057Z" level=info msg="runSandbox: deleting pod ID e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90 from idIndex" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.523310610Z" level=info msg="runSandbox: removing pod sandbox e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.523327171Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.523339931Z" level=info msg="runSandbox: unmounting shmPath for sandbox e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.535459597Z" level=info msg="runSandbox: removing pod sandbox from storage: e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.538484290Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.538502083Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=c7857315-b27b-49e3-afc7-cf5323990f29 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:21.538727 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:47:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:21.538771 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:47:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:21.538794 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:47:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:21.538846 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(e9def468a9a4014e7c64ecb1fb1042bbd3c4a7b9c22158770bd645b28ab5db90): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:47:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:21.641225 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.641552423Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.641587730Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.652931710Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/c0c44627-1703-4991-89c2-e49f3e3393c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:21.652952048Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:22.022792395Z" level=info msg="NetworkStart: stopping network for sandbox 2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:22.022929634Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/d1fc2ff2-19bd-4211-9718-6eb1bf2886f3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:22.022952601Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:22.022961217Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:22.022967823Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:23.019363877Z" level=info msg="NetworkStart: stopping network for sandbox 6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:23.019496301Z" level=info msg="Got pod network 
&{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/e6dc52b2-c38b-4ddd-8fb9-a16057ce21c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:23.019519688Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:23.019525917Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:23.019531804Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:25.027725623Z" level=info msg="NetworkStart: stopping network for sandbox c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:25.027871251Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/53c9c642-2260-4632-aedf-b13a6fbe5511 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:25.027895937Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:25.027902722Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:25.027908881Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:27.023979375Z" level=info msg="NetworkStart: stopping network for sandbox 12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:27.024125991Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/189bd401-8a89-499b-917a-527a83510af1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:27.024150266Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:27.024157061Z" level=warning 
msg="falling back to loading from existing plugins on disk" Jan 23 17:47:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:27.024163378Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:27.911780 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:27.911923 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:27.911929 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:27.911935 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:27.911941 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:27.911948 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:47:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:27.911954 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:47:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:28.140921002Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.525985923Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.526033045Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.526202151Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.526243903Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.526429990Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.526455790Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.526713614Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.526764692Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-178419fa\x2daa99\x2d442d\x2db551\x2dd34002a45d99.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-178419fa\x2daa99\x2d442d\x2db551\x2dd34002a45d99.mount has successfully entered the 'dead' state. Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cb03805d\x2d13e8\x2d4b9b\x2dac65\x2dcb93a277aa04.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cb03805d\x2d13e8\x2d4b9b\x2dac65\x2dcb93a277aa04.mount has successfully entered the 'dead' state. Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-82cfa1f8\x2d2137\x2d4e78\x2d9411\x2d5969bac13519.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-82cfa1f8\x2d2137\x2d4e78\x2d9411\x2d5969bac13519.mount has successfully entered the 'dead' state. Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a5dd2dd8\x2db51b\x2d4fba\x2d80bc\x2d7858bbcc1b13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a5dd2dd8\x2db51b\x2d4fba\x2d80bc\x2d7858bbcc1b13.mount has successfully entered the 'dead' state. Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.533134861Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.533168315Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5dc73034\x2d15dc\x2d4dd8\x2daf9c\x2dd8b49d0941c6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5dc73034\x2d15dc\x2d4dd8\x2daf9c\x2dd8b49d0941c6.mount has successfully entered the 'dead' state. Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cb03805d\x2d13e8\x2d4b9b\x2dac65\x2dcb93a277aa04.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cb03805d\x2d13e8\x2d4b9b\x2dac65\x2dcb93a277aa04.mount has successfully entered the 'dead' state. Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-178419fa\x2daa99\x2d442d\x2db551\x2dd34002a45d99.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-178419fa\x2daa99\x2d442d\x2db551\x2dd34002a45d99.mount has successfully entered the 'dead' state. 
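A note on the run-utsns/run-ipcns/run-netns .mount units above: each failed sandbox leaves per-pod UTS, IPC, and network namespace bind mounts that CRI-O unmounts during runSandbox cleanup, and systemd reports each transient mount unit entering the 'dead' state. The \x2d runs in the unit names are not corruption; they are systemd's escaping of literal '-' characters in the mount path (here, the namespace UUID). A tiny decoder, handling only that one escape:

    # Recover the namespace UUID from a systemd-escaped transient mount unit
    # name; minimal sketch that handles only the "\x2d" (literal '-') escape.
    def unescape_unit_fragment(fragment: str) -> str:
        return fragment.replace(r"\x2d", "-")

    unit = r"run-utsns-178419fa\x2daa99\x2d442d\x2db551\x2dd34002a45d99.mount"
    _, _, rest = unit.partition("run-utsns-")
    print(unescape_unit_fragment(rest.removesuffix(".mount")))
    # -> 178419fa-aa99-442d-b551-d34002a45d99, matching the
    # NetNS:/var/run/netns/178419fa-... path of the apiserver-86c7cf6467-bbxls
    # sandbox in the earlier "Got pod network" entry.
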
Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-82cfa1f8\x2d2137\x2d4e78\x2d9411\x2d5969bac13519.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-82cfa1f8\x2d2137\x2d4e78\x2d9411\x2d5969bac13519.mount has successfully entered the 'dead' state. Jan 23 17:47:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a5dd2dd8\x2db51b\x2d4fba\x2d80bc\x2d7858bbcc1b13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a5dd2dd8\x2db51b\x2d4fba\x2d80bc\x2d7858bbcc1b13.mount has successfully entered the 'dead' state. Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576339327Z" level=info msg="runSandbox: deleting pod ID b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f from idIndex" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576337693Z" level=info msg="runSandbox: deleting pod ID 49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d from idIndex" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576392155Z" level=info msg="runSandbox: removing pod sandbox 49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576408641Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576419769Z" level=info msg="runSandbox: unmounting shmPath for sandbox 49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576340118Z" level=info msg="runSandbox: deleting pod ID f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3 from idIndex" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576469406Z" level=info msg="runSandbox: removing pod sandbox f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576483028Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576497081Z" level=info msg="runSandbox: unmounting shmPath for sandbox f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:47:31.576379712Z" level=info msg="runSandbox: removing pod sandbox b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576559683Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.576578454Z" level=info msg="runSandbox: unmounting shmPath for sandbox b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.577296386Z" level=info msg="runSandbox: deleting pod ID f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b from idIndex" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.577319055Z" level=info msg="runSandbox: removing pod sandbox f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.577331364Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.577345064Z" level=info msg="runSandbox: unmounting shmPath for sandbox f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.580374351Z" level=info msg="runSandbox: deleting pod ID b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397 from idIndex" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.580399772Z" level=info msg="runSandbox: removing pod sandbox b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.580413157Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.580425860Z" level=info msg="runSandbox: unmounting shmPath for sandbox b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.589489400Z" level=info msg="runSandbox: removing pod sandbox from storage: 
49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.589527481Z" level=info msg="runSandbox: removing pod sandbox from storage: f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.589533083Z" level=info msg="runSandbox: removing pod sandbox from storage: b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.590456865Z" level=info msg="runSandbox: removing pod sandbox from storage: f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.590475128Z" level=info msg="runSandbox: removing pod sandbox from storage: b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.592761245Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.592782766Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=12c28269-d1a7-45b6-9878-f42216d6aaef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.593024 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.593074 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.593098 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.593149 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.600863736Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.600888361Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=422a65f9-9587-4bfa-815e-7b0a2bba7bb5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.601105 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.601143 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.601164 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.601211 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.604308103Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.604332121Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=0c225aa5-cff4-477c-9a80-2c0d82527a10 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.604555 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.604590 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.604611 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.604647 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.608044463Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.608066516Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=693dc6e6-e843-4625-92f4-bfafd6f6fdeb name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.608284 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.608316 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.608336 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.608371 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.611622301Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.611642118Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=e8f1e6ab-5e4e-4519-acbf-d99b38c9f838 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.611836 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.611871 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.611891 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.611928 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:31.661663 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:31.661707 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:31.661799 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:31.661952 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:31.662074 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662125171Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662165842Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662189469Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662221020Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662264713Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662125923Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662324009Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662274098Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662380947Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.662304826Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.689187035Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/7ab69f46-09ef-4cd0-a17b-ed850389aaa4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.689211670Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.690085854Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3c65d715-2be0-496e-8ecd-99b9a50e7940 
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.690109922Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.692502379Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/d330b8ef-4e6a-42de-a35b-dd575bd03c6c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.692523496Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.693769296Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/5f293b17-0b89-4f11-96fb-7d19038a9057 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.693790794Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.694483861Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/825ff599-dd9a-4044-9275-e48e82425f84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:31.694502235Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:31.996695 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:47:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:31.997214 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5dc73034\x2d15dc\x2d4dd8\x2daf9c\x2dd8b49d0941c6.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5dc73034\x2d15dc\x2d4dd8\x2daf9c\x2dd8b49d0941c6.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5dc73034\x2d15dc\x2d4dd8\x2daf9c\x2dd8b49d0941c6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5dc73034\x2d15dc\x2d4dd8\x2daf9c\x2dd8b49d0941c6.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-178419fa\x2daa99\x2d442d\x2db551\x2dd34002a45d99.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-178419fa\x2daa99\x2d442d\x2db551\x2dd34002a45d99.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cb03805d\x2d13e8\x2d4b9b\x2dac65\x2dcb93a277aa04.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cb03805d\x2d13e8\x2d4b9b\x2dac65\x2dcb93a277aa04.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-82cfa1f8\x2d2137\x2d4e78\x2d9411\x2d5969bac13519.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-82cfa1f8\x2d2137\x2d4e78\x2d9411\x2d5969bac13519.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a5dd2dd8\x2db51b\x2d4fba\x2d80bc\x2d7858bbcc1b13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a5dd2dd8\x2db51b\x2d4fba\x2d80bc\x2d7858bbcc1b13.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-49386f6be53e7d302ae452e33af7cc196a9483a65fe3bdc4dc66688e0fbead2d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b8657c40085a1b4479304f78861a0037a551eef6cb74fd4595c56718e6127397-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f2ff5fa5419a38754334b01abe5c68ebc34404771bbe1ffaa127a90d59adb8a3-userdata-shm.mount has successfully entered the 'dead' state. 
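A few entries above (17:47:31.997), kubelet names the likely root cause of all these readiness-file timeouts: the ovnkube-node container on this node is in CrashLoopBackOff with a 5m0s back-off, so 10-ovn-kubernetes.conf never gets (re)written. A sketch of the next step, using the pod and container names exactly as they appear in the log (standard oc usage, nothing cluster-specific assumed):

  # Logs from the last failed run of the crash-looping container
  oc -n openshift-ovn-kubernetes logs ovnkube-node-897lw -c ovnkube-node --previous
  # Restart count, last state, and recent events for the same pod
  oc -n openshift-ovn-kubernetes describe pod ovnkube-node-897lw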
Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b387f9de2d181c48a6836101cad9a2032284b41988028adad621658c7f8e709f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:47:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f8c5a194a610880eb484af6b68d421984794dca1358ffec5171b6e9ed869b00b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:47:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:33.022630828Z" level=info msg="NetworkStart: stopping network for sandbox b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:33.022985385Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/fd930797-666c-4322-ab47-6ee5c46d66da Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:33.023008795Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:33.023016972Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:33.023023534Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:38.021032853Z" level=info msg="NetworkStart: stopping network for sandbox 01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:38.021197687Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d696fbaf-6315-4c0e-b843-ab63788b8c6c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:38.021231500Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:38.021239430Z" level=warning msg="falling back to loading from existing plugins 
on disk" Jan 23 17:47:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:38.021246389Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:45.996900 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:47:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:45.997412 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.036003138Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.036040443Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9d1b8d6d\x2d3232\x2d44cd\x2dac88\x2db5cd46a067ba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9d1b8d6d\x2d3232\x2d44cd\x2dac88\x2db5cd46a067ba.mount has successfully entered the 'dead' state. Jan 23 17:47:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9d1b8d6d\x2d3232\x2d44cd\x2dac88\x2db5cd46a067ba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9d1b8d6d\x2d3232\x2d44cd\x2dac88\x2db5cd46a067ba.mount has successfully entered the 'dead' state. Jan 23 17:47:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9d1b8d6d\x2d3232\x2d44cd\x2dac88\x2db5cd46a067ba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9d1b8d6d\x2d3232\x2d44cd\x2dac88\x2db5cd46a067ba.mount has successfully entered the 'dead' state. 
Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.086311127Z" level=info msg="runSandbox: deleting pod ID f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5 from idIndex" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.086335488Z" level=info msg="runSandbox: removing pod sandbox f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.086350825Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.086368851Z" level=info msg="runSandbox: unmounting shmPath for sandbox f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.107491348Z" level=info msg="runSandbox: removing pod sandbox from storage: f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.110513355Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.110530411Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=931ffa14-220a-4e8e-8833-6fa44db5a316 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:55.110660 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:47:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:55.110815 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:47:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:55.110837 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:47:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:55.110884 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(f7492fb03e0ca35059b6d926ab60f405d20c5f9a3de8be7759295f1f58ef5ef5): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.651458536Z" level=info msg="NetworkStart: stopping network for sandbox c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.651586228Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/d3d4d9b1-7f21-438c-9506-0e074b855b26 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.651607539Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.651614359Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:47:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:55.651620302Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.032133790Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.032169197Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d7c35065\x2db086\x2d419d\x2d8205\x2d14106cdcb8d7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d7c35065\x2db086\x2d419d\x2d8205\x2d14106cdcb8d7.mount has successfully entered the 'dead' state. Jan 23 17:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d7c35065\x2db086\x2d419d\x2d8205\x2d14106cdcb8d7.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d7c35065\x2db086\x2d419d\x2d8205\x2d14106cdcb8d7.mount has successfully entered the 'dead' state. Jan 23 17:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d7c35065\x2db086\x2d419d\x2d8205\x2d14106cdcb8d7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d7c35065\x2db086\x2d419d\x2d8205\x2d14106cdcb8d7.mount has successfully entered the 'dead' state. Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.088280860Z" level=info msg="runSandbox: deleting pod ID 10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77 from idIndex" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.088305499Z" level=info msg="runSandbox: removing pod sandbox 10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.088319294Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.088330419Z" level=info msg="runSandbox: unmounting shmPath for sandbox 10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77-userdata-shm.mount has successfully entered the 'dead' state. 
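By this point the pattern for every affected pod (apiservers, controller-managers, guards, dns-default) is identical: RunPodSandbox -> Multus add times out -> runSandbox deletes the pod ID, unmounts shmPath, and removes the sandbox from storage -> kubelet starts a new sandbox, roughly once per poll timeout. Two quick ways to quantify the churn from the node (a sketch; standard journalctl/crictl invocations, and the pod-name filter is just an example):

  # How many sandboxes has CRI-O torn down since the loop started?
  journalctl -u crio --since "2023-01-23 17:47:00" --no-pager | grep -c "runSandbox: removing pod sandbox from storage"
  # Does any sandbox for the guard pod actually survive?
  crictl pods --name kube-controller-manager-guard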
Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.109440198Z" level=info msg="runSandbox: removing pod sandbox from storage: 10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.113010922Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:57.113029043Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=f920465b-0b63-4a28-ab3b-1ce2c2616902 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:57.113178 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:57.113234 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:57.113258 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:57.113311 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(10a56d78a6484f1abfe0d5a581974c813ebfae5a08d3c60dfdd35b72bccfcd77): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:57.997127 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:47:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:57.997670 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.041682594Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.041718682Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.042130376Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.042173865Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:47:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-54b9cfbc\x2d8a60\x2d43ec\x2dbca6\x2d775c3deecc0b.mount: Succeeded. 
Jan 23 17:47:58 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-48c63a36\x2d3a88\x2d4237\x2d9748\x2d1382657fa5da.mount: Succeeded.
Jan 23 17:47:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-48c63a36\x2d3a88\x2d4237\x2d9748\x2d1382657fa5da.mount: Succeeded.
Jan 23 17:47:58 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-54b9cfbc\x2d8a60\x2d43ec\x2dbca6\x2d775c3deecc0b.mount: Succeeded.
Jan 23 17:47:58 hub-master-0.workload.bos2.lab systemd[1]: run-netns-48c63a36\x2d3a88\x2d4237\x2d9748\x2d1382657fa5da.mount: Succeeded.
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.094305205Z" level=info msg="runSandbox: deleting pod ID e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383 from idIndex" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.094330700Z" level=info msg="runSandbox: removing pod sandbox e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.094342775Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.094353733Z" level=info msg="runSandbox: unmounting shmPath for sandbox e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.098284755Z" level=info msg="runSandbox: deleting pod ID 03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe from idIndex" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.098314592Z" level=info msg="runSandbox: removing pod sandbox 03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.098330699Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.098345107Z" level=info msg="runSandbox: unmounting shmPath for sandbox 03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.106412715Z" level=info msg="runSandbox: removing pod sandbox from storage: e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.109989425Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.110012058Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=a9cef618-0fad-4801-9d8b-cf7a164fc3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:58.110212 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:58.110257 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:58.110281 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:58.110326 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.110447501Z" level=info msg="runSandbox: removing pod sandbox from storage: 03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.113909989Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.113929826Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b2f80a1e-9fbe-474f-bfca-9483b374b8b0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:58.114125 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:58.114160 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:58.114180 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:47:58.114221 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.142806705Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.632921 8631 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hn42c]
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.632965 8631 topology_manager.go:205] "Topology Admit Handler"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.639415 8631 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hn42c]
Jan 23 17:47:58 hub-master-0.workload.bos2.lab systemd[1]: Created slice libcontainer container kubepods-besteffort-pod1f31a541_be8d_4508_96b0_75cb13604d3d.slice.
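Note: the block above is one full cycle of the failure signature that repeats throughout this capture. Every sandbox ADD/DEL on the node goes through Multus, which blocks until OVN-Kubernetes writes its readiness indicator file (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf) and then gives up with "pollimmediate error: timed out waiting for the condition". The Go sketch below shows the general poll-on-os.Stat pattern the message points at, using wait.PollImmediate from k8s.io/apimachinery; the interval and timeout values here are illustrative assumptions, not Multus's actual settings.

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator polls until path exists, in the style the
    // log's "PollImmediate error waiting for ReadinessIndicatorFile" points
    // at. Interval/timeout are illustrative; Multus's real values may differ.
    func waitForReadinessIndicator(path string, interval, timeout time.Duration) error {
    	return wait.PollImmediate(interval, timeout, func() (bool, error) {
    		if _, err := os.Stat(path); err != nil {
    			return false, nil // file not there yet; keep polling
    		}
    		return true, nil
    	})
    }

    func main() {
    	err := waitForReadinessIndicator(
    		"/var/run/multus/cni/net.d/10-ovn-kubernetes.conf",
    		1*time.Second, 10*time.Second,
    	)
    	if err != nil {
    		// On timeout, wait.PollImmediate returns an error whose text is
    		// the "timed out waiting for the condition" seen in this log.
    		fmt.Println("pollimmediate error:", err)
    	}
    }

Until that file appears, i.e. until the crash-looping ovnkube-node container stays up long enough to write it, every sandbox create and delete on this node keeps timing out exactly as above.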
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.782185 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzss\" (UniqueName: \"kubernetes.io/projected/1f31a541-be8d-4508-96b0-75cb13604d3d-kube-api-access-pxzss\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.782222 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f31a541-be8d-4508-96b0-75cb13604d3d-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.782289 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f31a541-be8d-4508-96b0-75cb13604d3d-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.782317 8631 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f31a541-be8d-4508-96b0-75cb13604d3d-ready\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.883399 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzss\" (UniqueName: \"kubernetes.io/projected/1f31a541-be8d-4508-96b0-75cb13604d3d-kube-api-access-pxzss\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.883429 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f31a541-be8d-4508-96b0-75cb13604d3d-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.883452 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f31a541-be8d-4508-96b0-75cb13604d3d-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.883472 8631 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f31a541-be8d-4508-96b0-75cb13604d3d-ready\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.883604 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f31a541-be8d-4508-96b0-75cb13604d3d-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.883700 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f31a541-be8d-4508-96b0-75cb13604d3d-ready\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.883772 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f31a541-be8d-4508-96b0-75cb13604d3d-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.898247 8631 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzss\" (UniqueName: \"kubernetes.io/projected/1f31a541-be8d-4508-96b0-75cb13604d3d-kube-api-access-pxzss\") pod \"cni-sysctl-allowlist-ds-hn42c\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:47:58.948955 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.949366324Z" level=info msg="Running pod sandbox: openshift-multus/cni-sysctl-allowlist-ds-hn42c/POD" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.949402107Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.963956402Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-hn42c Namespace:openshift-multus ID:e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361 UID:1f31a541-be8d-4508-96b0-75cb13604d3d NetNS:/var/run/netns/8b2a1797-80fd-4e02-9ece-7cb2ef5107a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:47:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:47:58.963979934Z" level=info msg="Adding pod openshift-multus_cni-sysctl-allowlist-ds-hn42c to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:47:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-54b9cfbc\x2d8a60\x2d43ec\x2dbca6\x2d775c3deecc0b.mount: Succeeded.
Jan 23 17:47:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-03819424dd42166967ca8067a5b2cb0372ce425cff505bebb483ee9144e886fe-userdata-shm.mount: Succeeded.
Jan 23 17:47:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e38e306acd60dbc75afccc082fb058386102d24448381e2c3863ed395f7b4383-userdata-shm.mount: Succeeded.
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.032580391Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.032617876Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b41234bc\x2d63a9\x2d4461\x2d927e\x2d8b09a071d53f.mount: Succeeded.
Jan 23 17:48:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b41234bc\x2d63a9\x2d4461\x2d927e\x2d8b09a071d53f.mount: Succeeded.
Jan 23 17:48:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b41234bc\x2d63a9\x2d4461\x2d927e\x2d8b09a071d53f.mount: Succeeded.
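Note: the reconciler.go/operation_generator.go entries above for cni-sysctl-allowlist-ds-hn42c show the kubelet volume manager's normal two-phase order: VerifyControllerAttachedVolume confirms each volume is attached, then MountVolume.SetUp mounts it into the pod. A schematic Go sketch of that order follows; the types and function names are hypothetical stand-ins, not kubelet's API (the real logic lives in kubelet's volumemanager and is far more involved).

    package main

    import "fmt"

    // volume is a hypothetical stand-in for a kubelet volume spec.
    type volume struct{ name, kind string }

    // reconcile mirrors, schematically, the two-phase sequence logged above:
    // attach verification for every volume first, then SetUp (mount).
    func reconcile(pod string, vols []volume) {
    	for _, v := range vols {
    		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod=%s\n", v.name, pod)
    	}
    	for _, v := range vols {
    		fmt.Printf("MountVolume.SetUp succeeded for volume %q (%s) pod=%s\n", v.name, v.kind, pod)
    	}
    }

    func main() {
    	reconcile("openshift-multus/cni-sysctl-allowlist-ds-hn42c", []volume{
    		{"kube-api-access-pxzss", "projected"},
    		{"cni-sysctl-allowlist", "configmap"},
    		{"tuning-conf-dir", "host-path"},
    		{"ready", "empty-dir"},
    	})
    }

Worth noting: all four mounts succeed, so storage on the node is healthy; it is only the CNI ADD that follows ("Adding pod openshift-multus_cni-sysctl-allowlist-ds-hn42c to CNI network") that gets stuck behind the readiness indicator file.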
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.082315969Z" level=info msg="runSandbox: deleting pod ID 9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c from idIndex" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.082343259Z" level=info msg="runSandbox: removing pod sandbox 9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.082357490Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.082370779Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c-userdata-shm.mount: Succeeded.
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.102449714Z" level=info msg="runSandbox: removing pod sandbox from storage: 9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.105374848Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:03.105394988Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=208f720d-a4d4-4d31-9f44-d78137edef44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:03.105589 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 17:48:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:03.105633 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:48:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:03.105655 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:48:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:03.105699 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(9e818dfb74e727903b3247b6b56c9f177e61dfb35fc60d3054fb15bd984e890c): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:06.668177939Z" level=info msg="NetworkStart: stopping network for sandbox 1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:06.668416186Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/c0c44627-1703-4991-89c2-e49f3e3393c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:06.668440858Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:06.668448369Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:48:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:06.668454153Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.035020352Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.035055664Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d1fc2ff2\x2d19bd\x2d4211\x2d9718\x2d6eb1bf2886f3.mount: Succeeded.
Jan 23 17:48:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d1fc2ff2\x2d19bd\x2d4211\x2d9718\x2d6eb1bf2886f3.mount: Succeeded.
Jan 23 17:48:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d1fc2ff2\x2d19bd\x2d4211\x2d9718\x2d6eb1bf2886f3.mount: Succeeded.
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.085307112Z" level=info msg="runSandbox: deleting pod ID 2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3 from idIndex" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.085333637Z" level=info msg="runSandbox: removing pod sandbox 2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.085348971Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.085371952Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3-userdata-shm.mount: Succeeded.
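Note: the mount unit names above (run-netns-d1fc2ff2\x2d19bd\x2d..., run-containers-storage-overlay\x2dcontainers-...) are systemd-escaped: in a unit name '-' encodes the path separator '/', so a literal '-' in a path component is escaped as \x2d. A minimal Go decoder for those \xNN escapes, handy when mapping these units back to the namespace IDs kubelet and CRI-O log, follows; it only handles the hex escapes seen in this log, not full unit-name decoding.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // unescapeUnit decodes systemd's \xNN escapes (e.g. \x2d -> '-') in a
    // unit name. The '-' <-> '/' mapping of full unit-name decoding is out
    // of scope here.
    func unescapeUnit(s string) string {
    	var b strings.Builder
    	for i := 0; i < len(s); {
    		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
    			if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
    				b.WriteByte(byte(n))
    				i += 4
    				continue
    			}
    		}
    		b.WriteByte(s[i])
    		i++
    	}
    	return b.String()
    }

    func main() {
    	fmt.Println(unescapeUnit(`run-netns-d1fc2ff2\x2d19bd\x2d4211\x2d9718\x2d6eb1bf2886f3.mount`))
    	// Output: run-netns-d1fc2ff2-19bd-4211-9718-6eb1bf2886f3.mount
    }

The decoded UUID is the network namespace name under /var/run/netns, which matches the NetNS field in the crio "Got pod network" entries.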
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.101443341Z" level=info msg="runSandbox: removing pod sandbox from storage: 2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.104892975Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.104911351Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=34b0fa54-43aa-4d5c-8b1f-319e9fc4fae2 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:07.105133 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:48:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:07.105187 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:48:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:07.105220 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:48:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:07.105280 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(2ef9b2e6eede7c432b5c88dd7a4dbe983e425f32779de2aa36b72f2fa15639e3): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:48:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:07.996695 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.997090009Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:07.997131284Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.008123242Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/6e4177f4-1af9-4359-a92d-adf66df0477e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.008145491Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.030831956Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.030863763Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e6dc52b2\x2dc38b\x2d4ddd\x2d8fb9\x2da16057ce21c2.mount: Succeeded.
Jan 23 17:48:08 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e6dc52b2\x2dc38b\x2d4ddd\x2d8fb9\x2da16057ce21c2.mount: Succeeded.
Jan 23 17:48:08 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e6dc52b2\x2dc38b\x2d4ddd\x2d8fb9\x2da16057ce21c2.mount: Succeeded.
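Note: the "No sandbox for pod can be found. Need to start a new one" entries are the kubelet's sync loop re-driving each stuck pod, and the earlier "back-off 5m0s restarting failed container=ovnkube-node" line shows the container restart backoff already at its ceiling. As I understand the kubelet defaults, that backoff starts at 10s and doubles per failed restart up to a 5m cap; the Go sketch below computes that schedule, with the constants labeled as assumptions worth verifying against the running kubelet version.

    package main

    import (
    	"fmt"
    	"time"
    )

    // Assumed kubelet defaults: initial 10s, doubling per failed restart,
    // capped at 5m; "back-off 5m0s ..." in the log is this ceiling.
    func backoffSchedule(initial, max time.Duration, restarts int) []time.Duration {
    	var out []time.Duration
    	d := initial
    	for i := 0; i < restarts; i++ {
    		out = append(out, d)
    		if d *= 2; d > max {
    			d = max
    		}
    	}
    	return out
    }

    func main() {
    	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 8))
    	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
    }

Once the backoff pins at 5m0s, ovnkube-node is only retried every five minutes, which is why the readiness indicator file never appears and every other pod's sandbox retry in between fails the same way.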
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.083314996Z" level=info msg="runSandbox: deleting pod ID 6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2 from idIndex" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.083338100Z" level=info msg="runSandbox: removing pod sandbox 6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.083350476Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.083362074Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2-userdata-shm.mount: Succeeded.
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.095475464Z" level=info msg="runSandbox: removing pod sandbox from storage: 6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.098352845Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:08.098371585Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=1bbc1c13-8681-4f80-b387-1e6d295bd432 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:08.098552 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Jan 23 17:48:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:08.098592 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:48:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:08.098616 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:48:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:08.098666 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(6a65d207e737b87dbd6aed404ebdb301ba17827505981aa3b8d3ff75640385e2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:48:08 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00106|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 adds)
Jan 23 17:48:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:09.995559 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:09.995907439Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:09.995947048Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.007717925Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/be99953d-6ceb-4ef9-9ff5-0f87b8b3d4fa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.007737658Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.038945061Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.038974411Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-53c9c642\x2d2260\x2d4632\x2daedf\x2db13a6fbe5511.mount: Succeeded.
Jan 23 17:48:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-53c9c642\x2d2260\x2d4632\x2daedf\x2db13a6fbe5511.mount: Succeeded.
Jan 23 17:48:10 hub-master-0.workload.bos2.lab systemd[1]: run-netns-53c9c642\x2d2260\x2d4632\x2daedf\x2db13a6fbe5511.mount: Succeeded.
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-53c9c642\x2d2260\x2d4632\x2daedf\x2db13a6fbe5511.mount has successfully entered the 'dead' state. Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.077302582Z" level=info msg="runSandbox: deleting pod ID c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b from idIndex" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.077328216Z" level=info msg="runSandbox: removing pod sandbox c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.077342245Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.077353980Z" level=info msg="runSandbox: unmounting shmPath for sandbox c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.093416306Z" level=info msg="runSandbox: removing pod sandbox from storage: c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.096288359Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.096306471Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=1ce96144-d80b-467d-b529-1f664303521f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:10.096515 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:48:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:10.096553 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:48:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:10.096576 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:48:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:10.096622 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:48:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:10.995768 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:48:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:10.995917 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.996074476Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.996112467Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.996162127Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:10.996193991Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:48:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-c8356e97a11d14fa35cbfec74bb3900348577de82d972d5ec8331b63888ac48b-userdata-shm.mount has successfully entered the 'dead' state. 
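
Every "Running pod sandbox: <namespace>/<pod>/POD" line above is CRI-O servicing a /runtime.v1.RuntimeService/RunPodSandbox RPC from the kubelet, which keeps reissuing the call each sync period because the previous attempt failed. For reference, a sketch of driving that same RPC by hand against CRI-O's socket; the socket path is CRI-O's default, and this config is far sparser than what the kubelet actually sends.

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default socket on RHCOS; adjust if configured differently.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        // Minimal sandbox config, using identifiers from the log above.
        resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "network-check-target-qs9w4",
                    Namespace: "openshift-network-diagnostics",
                    Uid:       "0fdadbfc-e471-4e10-97e8-80b8e881aec6",
                },
            },
        })
        if err != nil {
            // With the default network not ready, this returns the same
            // "failed to create pod network sandbox ..." rpc error seen above.
            log.Fatal(err)
        }
        log.Println("sandbox ID:", resp.PodSandboxId)
    }
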
Jan 23 17:48:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:11.010993323Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/a5910cf2-aadf-4d9d-8045-6636d676040a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:11.011013732Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:11.011372501Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/ac91e867-7eab-4606-9d09-ee59e553890c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:11.011388920Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.035874985Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.035913959Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-189bd401\x2d8a89\x2d499b\x2d917a\x2d527a83510af1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-189bd401\x2d8a89\x2d499b\x2d917a\x2d527a83510af1.mount has successfully entered the 'dead' state. Jan 23 17:48:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-189bd401\x2d8a89\x2d499b\x2d917a\x2d527a83510af1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-189bd401\x2d8a89\x2d499b\x2d917a\x2d527a83510af1.mount has successfully entered the 'dead' state. 
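
The "Got pod network &{...}" lines print the network description CRI-O hands to its CNI layer, and the fields map one-to-one onto what is logged: pod name and namespace, sandbox ID, pod UID, the network namespace path, an empty list of additional attachments, and per-network runtime config. Roughly, in the shape of that struct; the field layout below is read off the log output itself, with the last three field types simplified rather than copied from the library.

    package main

    import "fmt"

    // PodNetwork mirrors what CRI-O logs as "Got pod network &{...}".
    type PodNetwork struct {
        Name          string            // pod name
        Namespace     string            // pod namespace
        ID            string            // sandbox ID
        UID           string            // pod UID
        NetNS         string            // /var/run/netns/<uuid>
        Networks      []string          // extra attachments; empty here
        RuntimeConfig map[string]string // keyed by network name (simplified)
        Aliases       map[string]string // simplified
    }

    func main() {
        pn := PodNetwork{
            Name:      "kube-controller-manager-guard-hub-master-0.workload.bos2.lab",
            Namespace: "openshift-kube-controller-manager",
            ID:        "540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4",
            UID:       "2284ac10-60cf-4768-bd24-3ea63b730ce6",
            NetNS:     "/var/run/netns/a5910cf2-aadf-4d9d-8045-6636d676040a",
        }
        fmt.Printf("Got pod network &%v\n", pn)
    }
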
Jan 23 17:48:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-189bd401\x2d8a89\x2d499b\x2d917a\x2d527a83510af1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-189bd401\x2d8a89\x2d499b\x2d917a\x2d527a83510af1.mount has successfully entered the 'dead' state. Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.085282942Z" level=info msg="runSandbox: deleting pod ID 12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97 from idIndex" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.085305147Z" level=info msg="runSandbox: removing pod sandbox 12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.085318850Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.085328955Z" level=info msg="runSandbox: unmounting shmPath for sandbox 12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97-userdata-shm.mount has successfully entered the 'dead' state. 
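
The runSandbox lines above show CRI-O's fixed teardown order after a failed start: delete the pod ID from the idIndex, remove the sandbox, drop its container ID, unmount the per-sandbox shm, then remove it from storage; the systemd "...userdata-shm.mount: Succeeded" entries are those unmounts completing. A sketch of the unmount step only, assuming golang.org/x/sys/unix; the lazy-detach flag is a common runtime choice, not confirmed from CRI-O's source.

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // detachShm lazily unmounts a sandbox's userdata shm mount so the
    // following "removing pod sandbox from storage" step cannot be blocked
    // by a busy mount.
    func detachShm(shmPath string) error {
        err := unix.Unmount(shmPath, unix.MNT_DETACH)
        if err == nil || err == unix.EINVAL {
            return nil // EINVAL typically means it was not mounted; fine
        }
        return fmt.Errorf("unmount shm %s: %w", shmPath, err)
    }

    func main() {
        // Path reconstructed from the systemd mount unit name in the log
        // (the \x2d escapes decode to "-").
        _ = detachShm("/run/containers/storage/overlay-containers/" +
            "12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97/userdata/shm")
    }
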
Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.101450453Z" level=info msg="runSandbox: removing pod sandbox from storage: 12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.104656148Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:12.104675768Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=25ab17b9-d05d-47fa-84b6-5e878a7513be name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:12.104809 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:48:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:12.104970 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:48:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:12.104994 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:48:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:12.105048 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(12b29e5f9b5e5a29f5c52d5ef5c3c8b7b85316e30e7df563ed51080ccf721d97): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:48:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:12.996888 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" Jan 23 17:48:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:12.997384 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:48:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:14.996108 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:48:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:14.996457230Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:14.996498780Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:48:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:15.007869088Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2b239d7a-a631-4f61-aa9e-66271b335f96 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:15.007896352Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.702158241Z" level=info msg="NetworkStart: stopping network for sandbox ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.702369472Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/7ab69f46-09ef-4cd0-a17b-ed850389aaa4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.702402885Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.702413733Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.702423201Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.702934908Z" level=info msg="NetworkStart: stopping network for sandbox 1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.703125507Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/3c65d715-2be0-496e-8ecd-99b9a50e7940 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.703154564Z" level=error msg="error loading cached 
network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.703162966Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.703171005Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.706950490Z" level=info msg="NetworkStart: stopping network for sandbox 9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.706972089Z" level=info msg="NetworkStart: stopping network for sandbox cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.707077098Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/d330b8ef-4e6a-42de-a35b-dd575bd03c6c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.707099604Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.707106459Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.707112762Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.707144966Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/825ff599-dd9a-4044-9275-e48e82425f84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.707167716Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.707174702Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.707181586Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.709892773Z" level=info msg="NetworkStart: stopping network for sandbox 
75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.710004683Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/5f293b17-0b89-4f11-96fb-7d19038a9057 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.710029087Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.710036126Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:48:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:16.710042437Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.034435185Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.034477383Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fd930797\x2d666c\x2d4322\x2dab47\x2d6ee5c46d66da.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-fd930797\x2d666c\x2d4322\x2dab47\x2d6ee5c46d66da.mount has successfully entered the 'dead' state. Jan 23 17:48:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fd930797\x2d666c\x2d4322\x2dab47\x2d6ee5c46d66da.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-fd930797\x2d666c\x2d4322\x2dab47\x2d6ee5c46d66da.mount has successfully entered the 'dead' state. Jan 23 17:48:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fd930797\x2d666c\x2d4322\x2dab47\x2d6ee5c46d66da.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-fd930797\x2d666c\x2d4322\x2dab47\x2d6ee5c46d66da.mount has successfully entered the 'dead' state. 
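
The paired "error loading cached network config ... not found in CNI cache" / "falling back to loading from existing plugins on disk" messages mean CRI-O could not find the result libcni cached when the network was attached, so it rebuilds the network config from the conf files on disk in order to run the DEL. A sketch of checking that cache directly; the cache directory and file-name convention are assumptions (libcni's defaults as commonly deployed), since runtimes can relocate the cache.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // findCachedResult looks for the JSON result libcni stores at ADD time;
    // the "not found in CNI cache" messages above are the miss case.
    func findCachedResult(network, containerID string) (string, error) {
        pattern := filepath.Join("/var/lib/cni/results",
            fmt.Sprintf("%s-%s*", network, containerID))
        matches, err := filepath.Glob(pattern)
        if err != nil {
            return "", err
        }
        if len(matches) == 0 {
            return "", os.ErrNotExist // the miss that triggers the fallback
        }
        return matches[0], nil
    }

    func main() {
        path, err := findCachedResult("multus-cni-network",
            "ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c")
        fmt.Println(path, err)
    }
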
Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.092305777Z" level=info msg="runSandbox: deleting pod ID b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6 from idIndex" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.092328845Z" level=info msg="runSandbox: removing pod sandbox b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.092344165Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.092356203Z" level=info msg="runSandbox: unmounting shmPath for sandbox b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.108419069Z" level=info msg="runSandbox: removing pod sandbox from storage: b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.111668015Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:18.111687293Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=8d9aa2a4-2030-489f-8160-ecb165383126 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:18.111869 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:48:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:18.111916 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:48:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:18.111940 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:48:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:18.111985 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(b78c3bdbe805a4ccc3ae8ad19c4e5add385544123896c7e3c471b6b6cd00f6e6): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:48:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:20.996155 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:48:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:20.996495886Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:20.996533611Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:48:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:21.007723118Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/21c0c598-b9b0-41c3-8e10-0d60a63e048b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:21.007741882Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:22.995698 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:48:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:22.995833 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:48:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:22.995966 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:48:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:22.996067890Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:22.996088849Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:22.996108060Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:48:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:22.996127477Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:48:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:22.996224743Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:22.996250105Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.018359939Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/60615e86-9143-42cd-83a2-898cc549be92 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.018579411Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.020758193Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/7c57a4fb-947f-4c20-804c-f1fc94dbf412 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.020791833Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.021562963Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/e08db8f5-b8d2-4117-8d2e-2f1664c951d6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: 
IpRanges:[]}] Aliases:map[]}" Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.021584920Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.032296725Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.032332624Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:48:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d696fbaf\x2d6315\x2d4c0e\x2db843\x2dab63788b8c6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d696fbaf\x2d6315\x2d4c0e\x2db843\x2dab63788b8c6c.mount has successfully entered the 'dead' state. Jan 23 17:48:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d696fbaf\x2d6315\x2d4c0e\x2db843\x2dab63788b8c6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d696fbaf\x2d6315\x2d4c0e\x2db843\x2dab63788b8c6c.mount has successfully entered the 'dead' state. 
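
The ovnkube-node entries ("RemoveContainer" followed immediately by "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s", at 17:48:12 above and again at 17:48:23 below) show the kubelet's per-container restart backoff already at its ceiling: the container has crashed often enough that each sync merely re-checks the five-minute timer. A sketch of that capped doubling; the constants are the upstream kubelet defaults, quoted from memory rather than this cluster's configuration.

    package main

    import (
        "fmt"
        "time"
    )

    // restartDelay reproduces the kubelet-style crash backoff: 10s, doubling
    // per consecutive crash, capped at 5m.
    func restartDelay(crashes int) time.Duration {
        const initial, maxDelay = 10 * time.Second, 5 * time.Minute
        d := initial
        for i := 1; i < crashes; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("crash %d -> wait %s\n", n, restartDelay(n))
        }
        // From crash 6 onward this prints 5m0s, matching the "back-off 5m0s
        // restarting failed container=ovnkube-node" messages in the log.
    }
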
Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.063301273Z" level=info msg="runSandbox: deleting pod ID 01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84 from idIndex" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.063328185Z" level=info msg="runSandbox: removing pod sandbox 01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.063343505Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.063357465Z" level=info msg="runSandbox: unmounting shmPath for sandbox 01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.078403464Z" level=info msg="runSandbox: removing pod sandbox from storage: 01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.081189765Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:23.081215223Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=b4f63396-13c3-4e9a-876a-f18ce055fefc name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:23.081448 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:48:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:23.081496 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:48:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:23.081519 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:48:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:23.081568 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:48:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:23.996331 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:48:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:23.996842 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:48:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d696fbaf\x2d6315\x2d4c0e\x2db843\x2dab63788b8c6c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d696fbaf\x2d6315\x2d4c0e\x2db843\x2dab63788b8c6c.mount has successfully entered the 'dead' state.
Jan 23 17:48:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-01a9895070a35854be81b8bf19c736ebae4146ac75dc0072d86f43c0a1689d84-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:27.912901 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:27.912920 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:27.912927 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:27.912934 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:27.912945 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:27.912954 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:48:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:27.912961 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:48:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:28.142821954Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:48:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:28.995622 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:48:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:28.995933349Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:28.995972497Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:48:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:29.007573536Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/357c8699-d8b7-4249-a8af-3467e9799fb1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:29.007595788Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:33.995567 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:48:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:33.995997138Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:33.996046597Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:48:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:34.007903648Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/4c7c3929-0c60-4fd2-a46f-ae156012b821 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:34.007926254Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:36.996299 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:48:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:36.997092084Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=5dbd754e-fc4b-43e0-bbc3-88d24082a774 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:48:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:36.997289026Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5dbd754e-fc4b-43e0-bbc3-88d24082a774 name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:48:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:36.997828070Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=dc781251-acca-47d5-9276-f003175987de name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:48:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:36.997928927Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dc781251-acca-47d5-9276-f003175987de name=/runtime.v1.ImageService/ImageStatus
Jan 23 17:48:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:36.998945359Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=3ec54955-450f-4bd6-bbff-3e38f934969b name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:48:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:36.999027193Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope.
-- Subject: Unit crio-conmon-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-conmon-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:48:37 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.
-- Subject: Unit crio-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit crio-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope has finished starting up.
--
-- The start-up result is done.
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.118274303Z" level=info msg="Created container fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=3ec54955-450f-4bd6-bbff-3e38f934969b name=/runtime.v1.RuntimeService/CreateContainer
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.118829853Z" level=info msg="Starting container: fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" id=1215e0ed-4c08-4e32-85c8-84dddf9f227a name=/runtime.v1.RuntimeService/StartContainer
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.137572138Z" level=info msg="Started container" PID=182670 containerID=fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=1215e0ed-4c08-4e32-85c8-84dddf9f227a name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.142838941Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.152960923Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.152977246Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.152986493Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.163022880Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.163040391Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.163050916Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.171551791Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.171567512Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.171576391Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.179718947Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.179738774Z" level=info msg="Updated default CNI network name to multus-cni-network"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab conmon[182655]: conmon fcf012811a61da80c036 : container 182670 exited with status 1
Jan 23 17:48:37 hub-master-0.workload.bos2.lab systemd[1]: crio-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope has successfully entered the 'dead' state.
Jan 23 17:48:37 hub-master-0.workload.bos2.lab systemd[1]: crio-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope: Consumed 571ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope completed and consumed the indicated resources.
Jan 23 17:48:37 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope has successfully entered the 'dead' state.
Jan 23 17:48:37 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope: Consumed 51ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205.scope completed and consumed the indicated resources.
Jan 23 17:48:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:37.790299 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/197.log"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:37.790815 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/196.log"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:37.792233 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" exitCode=1
Jan 23 17:48:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:37.792258 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205}
Jan 23 17:48:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:37.792278 8631 scope.go:115] "RemoveContainer" containerID="4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.793196202Z" level=info msg="Removing container: 4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e" id=5d5cf105-15fc-40ff-a2f2-737f1e061b83 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:48:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:37.793320 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:48:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:37.793895 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:48:37 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-4328f38941581b57c96e7b1a4eddee1493f09d7aee1ebabe3fe9d04f36eab334-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-4328f38941581b57c96e7b1a4eddee1493f09d7aee1ebabe3fe9d04f36eab334-merged.mount has successfully entered the 'dead' state.
Jan 23 17:48:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:37.833067847Z" level=info msg="Removed container 4ef176b949aa2a9d0d30f0c7ea787895f7962a128a0da9d9e040dae04001395e: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=5d5cf105-15fc-40ff-a2f2-737f1e061b83 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:48:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:38.795416 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/197.log"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.663751321Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.663791242Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d3d4d9b1\x2d7f21\x2d438c\x2d9506\x2d0e074b855b26.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d3d4d9b1\x2d7f21\x2d438c\x2d9506\x2d0e074b855b26.mount has successfully entered the 'dead' state.
Jan 23 17:48:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:40.667836 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:40.668882 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:40.669376 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:48:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d3d4d9b1\x2d7f21\x2d438c\x2d9506\x2d0e074b855b26.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d3d4d9b1\x2d7f21\x2d438c\x2d9506\x2d0e074b855b26.mount has successfully entered the 'dead' state.
Jan 23 17:48:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d3d4d9b1\x2d7f21\x2d438c\x2d9506\x2d0e074b855b26.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d3d4d9b1\x2d7f21\x2d438c\x2d9506\x2d0e074b855b26.mount has successfully entered the 'dead' state.
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.708356490Z" level=info msg="runSandbox: deleting pod ID c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7 from idIndex" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.708381602Z" level=info msg="runSandbox: removing pod sandbox c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.708402554Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.708417251Z" level=info msg="runSandbox: unmounting shmPath for sandbox c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.720400921Z" level=info msg="runSandbox: removing pod sandbox from storage: c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.723923906Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.723941046Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=0e91dc69-fd3e-4b76-8a76-bc5f7fdc38e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:40.724140 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:40.724176 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:40.724199 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:40.724245 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(c65293686c76f752e38b7281206cba306965eb6d4e4068d3b4702b7ed88e6dc7): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:48:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:40.800111 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.800437650Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.800468914Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.810790847Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/9d450c73-60ea-4953-9190-27075fbd32d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:40.810818738Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:43.977042491Z" level=info msg="NetworkStart: stopping network for sandbox e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:43.977192335Z" level=info msg="Got pod network &{Name:cni-sysctl-allowlist-ds-hn42c Namespace:openshift-multus ID:e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361 UID:1f31a541-be8d-4508-96b0-75cb13604d3d NetNS:/var/run/netns/8b2a1797-80fd-4e02-9ece-7cb2ef5107a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:43.977225241Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:48:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:43.977233260Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:48:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:43.977240751Z" level=info msg="Deleting pod openshift-multus_cni-sysctl-allowlist-ds-hn42c from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.678862364Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.679056028Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-c0c44627\x2d1703\x2d4991\x2d89c2\x2de49f3e3393c2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-c0c44627\x2d1703\x2d4991\x2d89c2\x2de49f3e3393c2.mount has successfully entered the 'dead' state.
Jan 23 17:48:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-c0c44627\x2d1703\x2d4991\x2d89c2\x2de49f3e3393c2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-c0c44627\x2d1703\x2d4991\x2d89c2\x2de49f3e3393c2.mount has successfully entered the 'dead' state.
Jan 23 17:48:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-c0c44627\x2d1703\x2d4991\x2d89c2\x2de49f3e3393c2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-c0c44627\x2d1703\x2d4991\x2d89c2\x2de49f3e3393c2.mount has successfully entered the 'dead' state.
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.719308721Z" level=info msg="runSandbox: deleting pod ID 1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1 from idIndex" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.719334402Z" level=info msg="runSandbox: removing pod sandbox 1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.719348955Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.719361061Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.739437248Z" level=info msg="runSandbox: removing pod sandbox from storage: 1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.742391046Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.742408797Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=e6b49688-7610-4ad4-be72-0ab66360b714 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:51.742631 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:48:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:51.742680 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:48:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:51.742702 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:48:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:51.742753 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(1cf7afbeb82ebc30e16eaf4c6c7b1bc5658365b3760622b4688c4bcf6420bbc1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30
Jan 23 17:48:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:51.821249 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.821556166Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.821586686Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.832392182Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/d7eb90bb-cc2a-415c-9318-d1c03e6f95f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:51.832411682Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:52.996864 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:48:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:48:52.997369 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:53.022299259Z" level=info msg="NetworkStart: stopping network for sandbox d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:53.022434195Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/6e4177f4-1af9-4359-a92d-adf66df0477e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:53.022457124Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:53.022463383Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:48:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:53.022472234Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:55.021651131Z" level=info msg="NetworkStart: stopping network for sandbox 390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:55.021850114Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/be99953d-6ceb-4ef9-9ff5-0f87b8b3d4fa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:55.021873370Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:55.021881133Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:48:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:55.021888230Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.024802715Z" level=info msg="NetworkStart: stopping network for sandbox 540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.024933633Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/a5910cf2-aadf-4d9d-8045-6636d676040a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.024958782Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.024966218Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.024975188Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.025254147Z" level=info msg="NetworkStart: stopping network for sandbox 162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.025376571Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/ac91e867-7eab-4606-9d09-ee59e553890c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.025396053Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.025405068Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:48:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:56.025410952Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:48:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:48:58.145074056Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:48:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:48:58.653549 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hn42c]
Jan 23 17:49:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:00.020658876Z" level=info msg="NetworkStart: stopping network for sandbox 45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:00.020815626Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2b239d7a-a631-4f61-aa9e-66271b335f96 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:49:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:00.020840380Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:49:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:00.020848335Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:49:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:00.020855377Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.714322032Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.714375775Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.714348046Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.714637230Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.718639330Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.718668391Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3c65d715\x2d2be0\x2d496e\x2d8ecd\x2d99b9a50e7940.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3c65d715\x2d2be0\x2d496e\x2d8ecd\x2d99b9a50e7940.mount has successfully entered the 'dead' state.
Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7ab69f46\x2d09ef\x2d4cd0\x2da17b\x2ded850389aaa4.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7ab69f46\x2d09ef\x2d4cd0\x2da17b\x2ded850389aaa4.mount has successfully entered the 'dead' state.
Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.719006312Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.719038809Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.720893680Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.720933808Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-825ff599\x2ddd9a\x2d4044\x2d9275\x2de48e82425f84.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-825ff599\x2ddd9a\x2d4044\x2d9275\x2de48e82425f84.mount has successfully entered the 'dead' state. Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d330b8ef\x2d4e6a\x2d42de\x2da35b\x2ddd575bd03c6c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d330b8ef\x2d4e6a\x2d42de\x2da35b\x2ddd575bd03c6c.mount has successfully entered the 'dead' state. Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5f293b17\x2d0b89\x2d4f11\x2d96fb\x2d7d19038a9057.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5f293b17\x2d0b89\x2d4f11\x2d96fb\x2d7d19038a9057.mount has successfully entered the 'dead' state. Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3c65d715\x2d2be0\x2d496e\x2d8ecd\x2d99b9a50e7940.mount: Succeeded. 
Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7ab69f46\x2d09ef\x2d4cd0\x2da17b\x2ded850389aaa4.mount: Succeeded. Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-825ff599\x2ddd9a\x2d4044\x2d9275\x2de48e82425f84.mount: Succeeded. Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5f293b17\x2d0b89\x2d4f11\x2d96fb\x2d7d19038a9057.mount: Succeeded. Jan 23 17:49:01 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d330b8ef\x2d4e6a\x2d42de\x2da35b\x2ddd575bd03c6c.mount: Succeeded. Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.754313981Z" level=info msg="runSandbox: deleting pod ID 1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697 from idIndex" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.754340678Z" level=info msg="runSandbox: removing pod sandbox 1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.754355977Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.754368333Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.754315738Z" level=info msg="runSandbox: deleting pod ID ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c from idIndex" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.754433545Z" level=info msg="runSandbox: removing pod sandbox ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23
17:49:01.754452996Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.754474439Z" level=info msg="runSandbox: unmounting shmPath for sandbox ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762344158Z" level=info msg="runSandbox: deleting pod ID cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7 from idIndex" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762375055Z" level=info msg="runSandbox: removing pod sandbox cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762346620Z" level=info msg="runSandbox: deleting pod ID 9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897 from idIndex" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762420991Z" level=info msg="runSandbox: removing pod sandbox 9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762437519Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762452093Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762346871Z" level=info msg="runSandbox: deleting pod ID 75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1 from idIndex" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762475656Z" level=info msg="runSandbox: removing pod sandbox 75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762487179Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762498778Z" level=info msg="runSandbox: unmounting shmPath for sandbox 75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1" 
id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762389389Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.762540967Z" level=info msg="runSandbox: unmounting shmPath for sandbox cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.770452583Z" level=info msg="runSandbox: removing pod sandbox from storage: ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.771533483Z" level=info msg="runSandbox: removing pod sandbox from storage: 1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.773533992Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.773553811Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=b32ce957-685f-4c9d-a352-390d1317c1af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.773791 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.773846 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.773869 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.773919 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.776824350Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.776846705Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=1c0692b3-fad1-4063-90cd-bcadbc3068af name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.777034 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.777071 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.777093 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.777133 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.778413766Z" level=info msg="runSandbox: removing pod sandbox from storage: cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.778474055Z" level=info msg="runSandbox: removing pod sandbox from storage: 9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.778480736Z" level=info msg="runSandbox: removing pod sandbox from storage: 75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.781594241Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.781612608Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b80628dc-b348-4b26-ab47-585042a521b1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.781810 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" 
name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.781857 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.781882 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.781928 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.784629184Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.784648954Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=511e2287-9daa-4427-a41a-1ce47d9b47dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.784860 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.784897 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.784919 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.784963 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.787606882Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.787625631Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=764c9a3a-94c0-449c-99a3-e2a356b4ee69 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.787847 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.787887 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.787908 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:01.787948 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:01.840455 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:01.840556 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:01.840648 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:01.840729 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.840745558Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.840786708Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:01.840882 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.840903182Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.840935316Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.840985422Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.841006569Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.841041529Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.841070831Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.841018699Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.841091780Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.872050443Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/441c9271-45cf-4be6-abfd-4efc5174910c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.872095605Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.872668356Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/d11867a7-2000-40e5-a8e6-7a2bf309df9a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.872688585Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.874120982Z" level=info msg="Got pod network 
&{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/338e7307-dda2-470f-9468-14d06d51de69 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.874144139Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.875648936Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/b304d734-3cf8-4439-9a5c-bdd46e5c3d8c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.875671442Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.876732379Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/552e314e-47f2-4d5a-921f-b6745a8d05a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:01.876751002Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-825ff599\x2ddd9a\x2d4044\x2d9275\x2de48e82425f84.mount: Succeeded. Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5f293b17\x2d0b89\x2d4f11\x2d96fb\x2d7d19038a9057.mount: Succeeded. Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d330b8ef\x2d4e6a\x2d42de\x2da35b\x2ddd575bd03c6c.mount: Succeeded. Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3c65d715\x2d2be0\x2d496e\x2d8ecd\x2d99b9a50e7940.mount: Succeeded.
Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7ab69f46\x2d09ef\x2d4cd0\x2da17b\x2ded850389aaa4.mount: Succeeded. Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-cee03dfc6ad40c6495d389c4d2e7af6b673b42b45523cad3da3d35680e54cad7-userdata-shm.mount: Succeeded. Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1159fdcef574b96d6cdc2664e31df4589d8ef57a60d69b5848478b9060770697-userdata-shm.mount: Succeeded. Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-75996070cc8b5c8c85f34aec60330ffb9fc8f27ff469b70f9b8952f28735d3d1-userdata-shm.mount: Succeeded. Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9b540609fb81ffdc7c2b6522fb5e4b58567246224cef1beaa89e1a50164e2897-userdata-shm.mount: Succeeded. Jan 23 17:49:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ae9932a4b3a0e1a7db0f18f6471c92bcc1e49e5f8fa737f851af03170350b53c-userdata-shm.mount: Succeeded.
Jan 23 17:49:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:05.996729 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:49:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:05.997237 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:49:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:06.023814043Z" level=info msg="NetworkStart: stopping network for sandbox 7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:06.024195386Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/21c0c598-b9b0-41c3-8e10-0d60a63e048b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:06.024227355Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:06.024234236Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:06.024240772Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.032558483Z" level=info msg="NetworkStart: stopping network for sandbox dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.032696414Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/60615e86-9143-42cd-83a2-898cc549be92 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.032717786Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.032725424Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.032734806Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" 
Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.033729513Z" level=info msg="NetworkStart: stopping network for sandbox 8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.033826635Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/e08db8f5-b8d2-4117-8d2e-2f1664c951d6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.033845382Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.033851868Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.033857809Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.035431313Z" level=info msg="NetworkStart: stopping network for sandbox f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.035569176Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/7c57a4fb-947f-4c20-804c-f1fc94dbf412 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.035594046Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.035600769Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:08.035607397Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:14.022035911Z" level=info msg="NetworkStart: stopping network for sandbox bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:14.022183819Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/357c8699-d8b7-4249-a8af-3467e9799fb1 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:14.022211892Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:14.022218791Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:14.022225597Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:18.996419 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:49:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:18.996930 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:49:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:19.022503484Z" level=info msg="NetworkStart: stopping network for sandbox 863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:19.022643282Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/4c7c3929-0c60-4fd2-a46f-ae156012b821 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:19.022667328Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:19.022676146Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:19.022682741Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:25.825079348Z" level=info msg="NetworkStart: stopping network for sandbox 66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:25.825253909Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/9d450c73-60ea-4953-9190-27075fbd32d7 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:25.825279466Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:25.825287431Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:25.825294775Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:27.913516 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:27.913554 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:27.913562 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:27.913569 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:27.913575 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:27.913582 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:49:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:27.913590 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:49:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:28.141691885Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:49:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:28.989039935Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_cni-sysctl-allowlist-ds-hn42c_openshift-multus_1f31a541-be8d-4508-96b0-75cb13604d3d_0(e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361): error removing pod openshift-multus_cni-sysctl-allowlist-ds-hn42c from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/cni-sysctl-allowlist-ds-hn42c/1f31a541-be8d-4508-96b0-75cb13604d3d]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:28.989074342Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361" 
id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8b2a1797\x2d80fd\x2d4e02\x2d9ece\x2d7cb2ef5107a4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8b2a1797\x2d80fd\x2d4e02\x2d9ece\x2d7cb2ef5107a4.mount has successfully entered the 'dead' state. Jan 23 17:49:29 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8b2a1797\x2d80fd\x2d4e02\x2d9ece\x2d7cb2ef5107a4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8b2a1797\x2d80fd\x2d4e02\x2d9ece\x2d7cb2ef5107a4.mount has successfully entered the 'dead' state. Jan 23 17:49:29 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8b2a1797\x2d80fd\x2d4e02\x2d9ece\x2d7cb2ef5107a4.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8b2a1797\x2d80fd\x2d4e02\x2d9ece\x2d7cb2ef5107a4.mount has successfully entered the 'dead' state. Jan 23 17:49:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:29.038315161Z" level=info msg="runSandbox: deleting pod ID e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361 from idIndex" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:29.038343499Z" level=info msg="runSandbox: removing pod sandbox e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:29.038357149Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:29.038372613Z" level=info msg="runSandbox: unmounting shmPath for sandbox e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:29 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361-userdata-shm.mount has successfully entered the 'dead' state. 
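The systemd mount unit names in these entries (run-utsns-8b2a1797\x2d80fd\x2d..., the overlay-containers userdata-shm mounts) use systemd's \xNN escaping: \x2d is a literal '-', \x7e a literal '~'. A minimal Go sketch for turning an escaped unit name back into a readable path, assuming only the \xNN sequences that occur in this log (the function name unescapeUnit is hypothetical):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnit reverses the \xNN escaping systemd applies inside unit
    // names, e.g. \x2d -> '-', \x7e -> '~'. Minimal sketch: it handles only
    // the \xNN sequences that appear in this log, nothing else.
    func unescapeUnit(name string) string {
        var b strings.Builder
        for i := 0; i < len(name); {
            if i+3 < len(name) && name[i] == '\\' && name[i+1] == 'x' {
                if n, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(n))
                    i += 4
                    continue
                }
            }
            b.WriteByte(name[i])
            i++
        }
        return b.String()
    }

    func main() {
        fmt.Println(unescapeUnit(`run-utsns-8b2a1797\x2d80fd\x2d4e02\x2d9ece\x2d7cb2ef5107a4.mount`))
        // prints: run-utsns-8b2a1797-80fd-4e02-9ece-7cb2ef5107a4.mount
    }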
Jan 23 17:49:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:29.058445039Z" level=info msg="runSandbox: removing pod sandbox from storage: e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:29.061865127Z" level=info msg="runSandbox: releasing container name: k8s_POD_cni-sysctl-allowlist-ds-hn42c_openshift-multus_1f31a541-be8d-4508-96b0-75cb13604d3d_0" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:29.061883851Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_cni-sysctl-allowlist-ds-hn42c_openshift-multus_1f31a541-be8d-4508-96b0-75cb13604d3d_0" id=64eb1e57-7fb6-4bd7-9658-4747df3bc127 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:29.062123 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-hn42c_openshift-multus_1f31a541-be8d-4508-96b0-75cb13604d3d_0(e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361): error adding pod openshift-multus_cni-sysctl-allowlist-ds-hn42c to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-hn42c/1f31a541-be8d-4508-96b0-75cb13604d3d]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:29.062308 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cni-sysctl-allowlist-ds-hn42c_openshift-multus_1f31a541-be8d-4508-96b0-75cb13604d3d_0(e5383477d8404a8e17971929eaa90ad3851511c28a80db46fb84e037ba320361): error adding pod openshift-multus_cni-sysctl-allowlist-ds-hn42c to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/cni-sysctl-allowlist-ds-hn42c/1f31a541-be8d-4508-96b0-75cb13604d3d]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/cni-sysctl-allowlist-ds-hn42c" Jan 23 17:49:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:29.996317 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:49:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:29.996830 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.029131 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f31a541-be8d-4508-96b0-75cb13604d3d-ready\") pod \"1f31a541-be8d-4508-96b0-75cb13604d3d\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.029159 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxzss\" (UniqueName: \"kubernetes.io/projected/1f31a541-be8d-4508-96b0-75cb13604d3d-kube-api-access-pxzss\") pod \"1f31a541-be8d-4508-96b0-75cb13604d3d\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.029186 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f31a541-be8d-4508-96b0-75cb13604d3d-cni-sysctl-allowlist\") pod \"1f31a541-be8d-4508-96b0-75cb13604d3d\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.029203 8631 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f31a541-be8d-4508-96b0-75cb13604d3d-tuning-conf-dir\") pod \"1f31a541-be8d-4508-96b0-75cb13604d3d\" (UID: \"1f31a541-be8d-4508-96b0-75cb13604d3d\") " Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.029290 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f31a541-be8d-4508-96b0-75cb13604d3d-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "1f31a541-be8d-4508-96b0-75cb13604d3d" (UID: "1f31a541-be8d-4508-96b0-75cb13604d3d"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 17:49:30.029407 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1f31a541-be8d-4508-96b0-75cb13604d3d/volumes/kubernetes.io~configmap/cni-sysctl-allowlist: clearQuota called, but quotas disabled Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: W0123 17:49:30.029434 8631 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1f31a541-be8d-4508-96b0-75cb13604d3d/volumes/kubernetes.io~empty-dir/ready: clearQuota called, but quotas disabled Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.029464 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f31a541-be8d-4508-96b0-75cb13604d3d-ready" (OuterVolumeSpecName: "ready") pod "1f31a541-be8d-4508-96b0-75cb13604d3d" (UID: "1f31a541-be8d-4508-96b0-75cb13604d3d"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.029531 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f31a541-be8d-4508-96b0-75cb13604d3d-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "1f31a541-be8d-4508-96b0-75cb13604d3d" (UID: "1f31a541-be8d-4508-96b0-75cb13604d3d"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:49:30 hub-master-0.workload.bos2.lab systemd[1]: var-lib-kubelet-pods-1f31a541\x2dbe8d\x2d4508\x2d96b0\x2d75cb13604d3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxzss.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-kubelet-pods-1f31a541\x2dbe8d\x2d4508\x2d96b0\x2d75cb13604d3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxzss.mount has successfully entered the 'dead' state. Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.037748 8631 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f31a541-be8d-4508-96b0-75cb13604d3d-kube-api-access-pxzss" (OuterVolumeSpecName: "kube-api-access-pxzss") pod "1f31a541-be8d-4508-96b0-75cb13604d3d" (UID: "1f31a541-be8d-4508-96b0-75cb13604d3d"). InnerVolumeSpecName "kube-api-access-pxzss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.129499 8631 reconciler.go:399] "Volume detached for volume \"kube-api-access-pxzss\" (UniqueName: \"kubernetes.io/projected/1f31a541-be8d-4508-96b0-75cb13604d3d-kube-api-access-pxzss\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.129517 8631 reconciler.go:399] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f31a541-be8d-4508-96b0-75cb13604d3d-cni-sysctl-allowlist\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.129526 8631 reconciler.go:399] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f31a541-be8d-4508-96b0-75cb13604d3d-tuning-conf-dir\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.129535 8631 reconciler.go:399] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f31a541-be8d-4508-96b0-75cb13604d3d-ready\") on node \"hub-master-0.workload.bos2.lab\" DevicePath \"\"" Jan 23 17:49:30 hub-master-0.workload.bos2.lab systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod1f31a541_be8d_4508_96b0_75cb13604d3d.slice. -- Subject: Unit kubepods-besteffort-pod1f31a541_be8d_4508_96b0_75cb13604d3d.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit kubepods-besteffort-pod1f31a541_be8d_4508_96b0_75cb13604d3d.slice has finished shutting down. Jan 23 17:49:30 hub-master-0.workload.bos2.lab systemd[1]: kubepods-besteffort-pod1f31a541_be8d_4508_96b0_75cb13604d3d.slice: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit kubepods-besteffort-pod1f31a541_be8d_4508_96b0_75cb13604d3d.slice completed and consumed the indicated resources. 
Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.906652 8631 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hn42c] Jan 23 17:49:30 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:30.910740 8631 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/cni-sysctl-allowlist-ds-hn42c] Jan 23 17:49:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:31.999544 8631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1f31a541-be8d-4508-96b0-75cb13604d3d path="/var/lib/kubelet/pods/1f31a541-be8d-4508-96b0-75cb13604d3d/volumes" Jan 23 17:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:36.846156672Z" level=info msg="NetworkStart: stopping network for sandbox 7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:36.846307786Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/d7eb90bb-cc2a-415c-9318-d1c03e6f95f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:36.846332577Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:36.846338985Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:36.846344972Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.034128654Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.034354911Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6e4177f4\x2d1af9\x2d4359\x2da92d\x2dadf66df0477e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6e4177f4\x2d1af9\x2d4359\x2da92d\x2dadf66df0477e.mount has successfully entered the 'dead' state. 
Jan 23 17:49:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6e4177f4\x2d1af9\x2d4359\x2da92d\x2dadf66df0477e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6e4177f4\x2d1af9\x2d4359\x2da92d\x2dadf66df0477e.mount has successfully entered the 'dead' state. Jan 23 17:49:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6e4177f4\x2d1af9\x2d4359\x2da92d\x2dadf66df0477e.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6e4177f4\x2d1af9\x2d4359\x2da92d\x2dadf66df0477e.mount has successfully entered the 'dead' state. Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.083348244Z" level=info msg="runSandbox: deleting pod ID d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c from idIndex" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.083378637Z" level=info msg="runSandbox: removing pod sandbox d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.083391213Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.083401935Z" level=info msg="runSandbox: unmounting shmPath for sandbox d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c-userdata-shm.mount has successfully entered the 'dead' state. 
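Each failed sandbox is unwound in the same fixed order, which the runSandbox messages here and in the next entries trace: idIndex cleanup, sandbox and container-ID removal, shm unmount, storage removal, then name release. A paraphrase of that order as Go pseudocode; every function name is hypothetical, a reading aid rather than CRI-O source:

    package main

    // Hypothetical stand-ins for the steps the runSandbox messages trace;
    // each no-op stub corresponds to one log message, in order.
    func deleteFromIDIndex(id string) {} // "deleting pod ID ... from idIndex"
    func removeSandbox(id string)     {} // "removing pod sandbox ..."
    func deleteContainerID(id string) {} // "deleting container ID from idIndex ..."
    func unmountShm(id string)        {} // "unmounting shmPath ..."
    func removeFromStorage(id string) {} // "removing pod sandbox from storage ..."
    func releaseNames(id string)      {} // "releasing container name / pod sandbox name"

    func cleanupFailedSandbox(id string) {
        deleteFromIDIndex(id)
        removeSandbox(id)
        deleteContainerID(id)
        unmountShm(id)
        removeFromStorage(id)
        releaseNames(id)
    }

    func main() {
        cleanupFailedSandbox("d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c")
    }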
Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.095432807Z" level=info msg="runSandbox: removing pod sandbox from storage: d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.099080281Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:38.099097444Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=e17b44da-d7c4-427d-a843-1c343b03e27f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:38.099282 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:38.099328 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:49:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:38.099350 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(d9416a48040d0a703f761655c7dde6b30e0382e7922dfcddd9e4531abb0cdd9c): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:49:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496178.1324] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:49:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496178.1329] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37) Jan 23 17:49:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496178.1329] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:49:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496178.1331] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:49:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496178.1336] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:49:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496178.1339] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:49:39 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496179.9495] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.033358029Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.033394823Z" level=info msg="runSandbox:
cleaning up namespaces after failing to run sandbox 390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-be99953d\x2d6ceb\x2d4ef9\x2d9ff5\x2d0f87b8b3d4fa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-be99953d\x2d6ceb\x2d4ef9\x2d9ff5\x2d0f87b8b3d4fa.mount has successfully entered the 'dead' state. Jan 23 17:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-be99953d\x2d6ceb\x2d4ef9\x2d9ff5\x2d0f87b8b3d4fa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-be99953d\x2d6ceb\x2d4ef9\x2d9ff5\x2d0f87b8b3d4fa.mount has successfully entered the 'dead' state. Jan 23 17:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-netns-be99953d\x2d6ceb\x2d4ef9\x2d9ff5\x2d0f87b8b3d4fa.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-be99953d\x2d6ceb\x2d4ef9\x2d9ff5\x2d0f87b8b3d4fa.mount has successfully entered the 'dead' state. Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.069305202Z" level=info msg="runSandbox: deleting pod ID 390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323 from idIndex" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.069329076Z" level=info msg="runSandbox: removing pod sandbox 390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.069342609Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.069356160Z" level=info msg="runSandbox: unmounting shmPath for sandbox 390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323-userdata-shm.mount has successfully entered the 'dead' state. 
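One underlying CNI ADD failure is reported four times on its way up the stack (remote_runtime, kuberuntime_sandbox, kuberuntime_manager, pod_workers), and each layer re-quotes the message, which is why pod_workers entries show \\\" where remote_runtime shows \". A small Go illustration of how nested %q quoting multiplies the backslashes; illustrative only, not the kubelet's actual logging path:

    package main

    import "fmt"

    func main() {
        // The same error text gains one escaping layer per re-quote.
        err := `plugin type="multus" name="multus-cni-network" failed (add)`
        once := fmt.Sprintf("%q", err)   // quotes print as \"  (remote_runtime level)
        twice := fmt.Sprintf("%q", once) // quotes print as \\\" (pod_workers level)
        fmt.Println(once)
        fmt.Println(twice)
    }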
Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.081454962Z" level=info msg="runSandbox: removing pod sandbox from storage: 390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.085079833Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:40.085097075Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=f412cfc0-8909-4918-bd0f-55840303e19e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:40.085246 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:40.085291 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:40.085315 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:40.085365 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(390d3234d7a4f2201895991b44438b6874036cc2f64cde31b7f21d8171a90323): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:49:40 hub-master-0.workload.bos2.lab ovs-vswitchd[3146]: ovs|00107|connmgr|INFO|br-int<->unix#2: 10 flow_mods 10 s ago (10 deletes) Jan 23 17:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:40.996711 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:49:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:40.997245 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.036100273Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.036144988Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.036268678Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.036300826Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-ac91e867\x2d7eab\x2d4606\x2d9d09\x2dee59e553890c.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-ac91e867\x2d7eab\x2d4606\x2d9d09\x2dee59e553890c.mount has successfully entered the 'dead' state. Jan 23 17:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a5910cf2\x2daadf\x2d4d9d\x2d8045\x2d6636d676040a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a5910cf2\x2daadf\x2d4d9d\x2d8045\x2d6636d676040a.mount has successfully entered the 'dead' state. Jan 23 17:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-ac91e867\x2d7eab\x2d4606\x2d9d09\x2dee59e553890c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-ac91e867\x2d7eab\x2d4606\x2d9d09\x2dee59e553890c.mount has successfully entered the 'dead' state. Jan 23 17:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a5910cf2\x2daadf\x2d4d9d\x2d8045\x2d6636d676040a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a5910cf2\x2daadf\x2d4d9d\x2d8045\x2d6636d676040a.mount has successfully entered the 'dead' state. Jan 23 17:49:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-ac91e867\x2d7eab\x2d4606\x2d9d09\x2dee59e553890c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-ac91e867\x2d7eab\x2d4606\x2d9d09\x2dee59e553890c.mount has successfully entered the 'dead' state. Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.080305052Z" level=info msg="runSandbox: deleting pod ID 162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b from idIndex" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.080330519Z" level=info msg="runSandbox: removing pod sandbox 162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.080346540Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.080358802Z" level=info msg="runSandbox: unmounting shmPath for sandbox 162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.081421991Z" level=info msg="runSandbox: deleting pod ID 540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4 from idIndex" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.081444776Z" level=info msg="runSandbox: removing pod sandbox 540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:49:41.081455848Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.081469210Z" level=info msg="runSandbox: unmounting shmPath for sandbox 540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.088415643Z" level=info msg="runSandbox: removing pod sandbox from storage: 540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.095468257Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.095498769Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=ad27d9ec-3b36-48d6-9834-77635218cc44 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:41.095645 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:41.095707 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:41.095745 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:41.095826 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.096402562Z" level=info msg="runSandbox: removing pod sandbox from storage: 162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.099695247Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:41.099718820Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=009abcf3-0cf3-4b53-ad40-d06dfd4d6161 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:41.099923 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:41.099959 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:41.099980 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:49:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:41.100036 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:49:42 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a5910cf2\x2daadf\x2d4d9d\x2d8045\x2d6636d676040a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a5910cf2\x2daadf\x2d4d9d\x2d8045\x2d6636d676040a.mount has successfully entered the 'dead' state. Jan 23 17:49:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-162f37b108a4b8bf7ed59ffa927fff2617e547d371efbae157288d59b445085b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:49:42 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-540b57ada79751ce92b3f2adf4dd416acb1a7d1167827099ce53fee2cc5cc2b4-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.031833566Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.031873695Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2b239d7a\x2da631\x2d4f61\x2daa9e\x2d66271b335f96.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2b239d7a\x2da631\x2d4f61\x2daa9e\x2d66271b335f96.mount has successfully entered the 'dead' state. Jan 23 17:49:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2b239d7a\x2da631\x2d4f61\x2daa9e\x2d66271b335f96.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2b239d7a\x2da631\x2d4f61\x2daa9e\x2d66271b335f96.mount has successfully entered the 'dead' state. Jan 23 17:49:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2b239d7a\x2da631\x2d4f61\x2daa9e\x2d66271b335f96.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-2b239d7a\x2da631\x2d4f61\x2daa9e\x2d66271b335f96.mount has successfully entered the 'dead' state. 
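Through all of this, ovnkube-node itself stays down: the recurring "back-off 5m0s restarting failed container" entries show the kubelet's restart back-off already saturated at its cap, so nothing writes the readiness indicator file and the sandbox failures keep repeating. A sketch of the doubling back-off those messages imply; the 10s initial delay and factor of 2 are assumptions, only the 5m cap is taken from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Delay doubles per consecutive crash until it saturates at the cap,
        // after which every restart attempt waits the full 5 minutes, as logged.
        delay := 10 * time.Second        // assumed initial delay
        const maxDelay = 5 * time.Minute // "back-off 5m0s" from the log
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("restart %d: wait %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }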
Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.072285716Z" level=info msg="runSandbox: deleting pod ID 45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e from idIndex" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.072317214Z" level=info msg="runSandbox: removing pod sandbox 45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.072333703Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.072346281Z" level=info msg="runSandbox: unmounting shmPath for sandbox 45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.084598722Z" level=info msg="runSandbox: removing pod sandbox from storage: 45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.088045712Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:45.088063836Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=5204e003-b736-4d67-9e1b-afd90e19e7ae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:45.088298 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:49:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:45.088452 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:49:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:45.088477 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:49:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:45.088528 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(45f25bc32cd9343513cfc789dcce7171f8569e9f795278c0ffaca80baad9dc7e): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.886474298Z" level=info msg="NetworkStart: stopping network for sandbox e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.886613472Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/441c9271-45cf-4be6-abfd-4efc5174910c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.886635524Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.886641622Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.886648132Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.887755222Z" level=info msg="NetworkStart: stopping network for sandbox c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.887856717Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/d11867a7-2000-40e5-a8e6-7a2bf309df9a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.887876102Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.887882507Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.887887747Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.888801159Z" level=info msg="NetworkStart: stopping network for sandbox f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.888947040Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4 
UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/338e7307-dda2-470f-9468-14d06d51de69 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.888973544Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.888981297Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.888987549Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.889600877Z" level=info msg="NetworkStart: stopping network for sandbox f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.889708674Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/b304d734-3cf8-4439-9a5c-bdd46e5c3d8c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.889728630Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.889735299Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.889743158Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.890778402Z" level=info msg="NetworkStart: stopping network for sandbox 8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.890949443Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/552e314e-47f2-4d5a-921f-b6745a8d05a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.890982680Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.890994006Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:49:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:46.891003846Z" level=info 
msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:49.995704 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:49:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:49.996039489Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:49.996077555Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:50.008164147Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/410406f7-4804-4606-97ec-1ea0d079a498 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:50.008219813Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:50.996426 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:50.996747419Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:50.996781738Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.007762076Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/fae5b7ba-330c-418d-9fb9-c5ed55e7afdc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.007784951Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.034934535Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" 
name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.034965629Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-21c0c598\x2db9b0\x2d41c3\x2d8e10\x2d0d60a63e048b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-21c0c598\x2db9b0\x2d41c3\x2d8e10\x2d0d60a63e048b.mount has successfully entered the 'dead' state. Jan 23 17:49:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-21c0c598\x2db9b0\x2d41c3\x2d8e10\x2d0d60a63e048b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-21c0c598\x2db9b0\x2d41c3\x2d8e10\x2d0d60a63e048b.mount has successfully entered the 'dead' state. Jan 23 17:49:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-21c0c598\x2db9b0\x2d41c3\x2d8e10\x2d0d60a63e048b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-21c0c598\x2db9b0\x2d41c3\x2d8e10\x2d0d60a63e048b.mount has successfully entered the 'dead' state. Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.080278689Z" level=info msg="runSandbox: deleting pod ID 7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05 from idIndex" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.080302651Z" level=info msg="runSandbox: removing pod sandbox 7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.080317220Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.080328338Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.095449995Z" level=info msg="runSandbox: removing pod sandbox from storage: 7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.098287609Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:51.098305049Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=0fa74549-4e91-40b3-b64a-f6a9d77354c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:51.098530 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:49:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:51.098571 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:49:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:51.098593 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:49:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:51.098643 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(7c59f5a7862f69976bbf1f6808c26c7b9b2c00078d97936c4bfab5b85574ae05): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:49:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:52.996075 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:49:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:52.996442160Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:52.996479649Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.007606190Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/09ab84a4-7278-47e8-b99b-dfd9a68d480f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.007633575Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.043707138Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.043738836Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.043743850Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.043828080Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.046537147Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.046578611Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:49:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e08db8f5\x2db8d2\x2d4117\x2d8d2e\x2d2f1664c951d6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-e08db8f5\x2db8d2\x2d4117\x2d8d2e\x2d2f1664c951d6.mount has successfully entered the 'dead' state.
Jan 23 17:49:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-60615e86\x2d9143\x2d42cd\x2d83a2\x2d898cc549be92.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-60615e86\x2d9143\x2d42cd\x2d83a2\x2d898cc549be92.mount has successfully entered the 'dead' state.
Jan 23 17:49:53 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7c57a4fb\x2d947f\x2d4c20\x2d804c\x2df1fc94dbf412.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7c57a4fb\x2d947f\x2d4c20\x2d804c\x2df1fc94dbf412.mount has successfully entered the 'dead' state.
Jan 23 17:49:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e08db8f5\x2db8d2\x2d4117\x2d8d2e\x2d2f1664c951d6.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-e08db8f5\x2db8d2\x2d4117\x2d8d2e\x2d2f1664c951d6.mount has successfully entered the 'dead' state.
Jan 23 17:49:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-60615e86\x2d9143\x2d42cd\x2d83a2\x2d898cc549be92.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-60615e86\x2d9143\x2d42cd\x2d83a2\x2d898cc549be92.mount has successfully entered the 'dead' state.
Jan 23 17:49:53 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7c57a4fb\x2d947f\x2d4c20\x2d804c\x2df1fc94dbf412.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7c57a4fb\x2d947f\x2d4c20\x2d804c\x2df1fc94dbf412.mount has successfully entered the 'dead' state.
Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.091301765Z" level=info msg="runSandbox: deleting pod ID 8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5 from idIndex" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.091328616Z" level=info msg="runSandbox: removing pod sandbox 8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.091342147Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.091352983Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.092304917Z" level=info msg="runSandbox: deleting pod ID dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105 from idIndex" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.092524365Z" level=info msg="runSandbox: removing pod sandbox dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.092536846Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.092547741Z" level=info msg="runSandbox: unmounting shmPath for sandbox dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.099279156Z" level=info msg="runSandbox: deleting pod ID f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d from idIndex" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.099305487Z" level=info msg="runSandbox: removing pod sandbox f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.099321467Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.099334704Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.107473370Z" level=info msg="runSandbox: removing pod sandbox from storage: 8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.107473272Z" level=info msg="runSandbox: removing pod sandbox from storage: dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.110332700Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.110350109Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=28f9e4fc-07dc-4a7e-8ec3-375f3a71d592 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.110523 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.110563 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.110586 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.110636 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.111430687Z" level=info msg="runSandbox: removing pod sandbox from storage: f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.113316614Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.113335107Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=b7605055-4a9e-4097-8a60-d4924bcfef6a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.113534 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.113589 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.113628 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.113691 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.116632036Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:53.116653938Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=feddbfa6-a409-4fe7-8056-2884101efd5c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.116843 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.116878 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.116900 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:53.116940 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:49:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e08db8f5\x2db8d2\x2d4117\x2d8d2e\x2d2f1664c951d6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-e08db8f5\x2db8d2\x2d4117\x2d8d2e\x2d2f1664c951d6.mount has successfully entered the 'dead' state. Jan 23 17:49:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7c57a4fb\x2d947f\x2d4c20\x2d804c\x2df1fc94dbf412.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7c57a4fb\x2d947f\x2d4c20\x2d804c\x2df1fc94dbf412.mount has successfully entered the 'dead' state. Jan 23 17:49:54 hub-master-0.workload.bos2.lab systemd[1]: run-netns-60615e86\x2d9143\x2d42cd\x2d83a2\x2d898cc549be92.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-60615e86\x2d9143\x2d42cd\x2d83a2\x2d898cc549be92.mount has successfully entered the 'dead' state. Jan 23 17:49:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-f49a09f4e51289dc83576b4fcfa6075fe4db270ab56e78907f66b23c9d121e7d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:49:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8d91aea6c930217fe60643e5bf23fa60545425de600a83a43b9675fe2270a9c5-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:49:54 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-dfac61c0f8f7fde09d0fb96c7ae5042a7224932eb1f463c0bda99bf3ef952105-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:49:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:55.996232 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:55.996544606Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:55.996588563Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:55.996877 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:49:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:55.997510 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:49:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:56.007349695Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/bd9be786-5bd3-44bd-af21-66af292dcd27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:56.007373943Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:56 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:49:56.996114 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:49:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:56.996393938Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:56.996439989Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:49:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:57.006882967Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/a13e0d48-67c8-4ad2-958f-7f2bab2c3288 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:49:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:57.006903386Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:49:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:58.142894536Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.033166216Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.033212090Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-357c8699\x2dd8b7\x2d4249\x2da8af\x2d3467e9799fb1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-357c8699\x2dd8b7\x2d4249\x2da8af\x2d3467e9799fb1.mount has successfully entered the 'dead' state. Jan 23 17:49:59 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-357c8699\x2dd8b7\x2d4249\x2da8af\x2d3467e9799fb1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-357c8699\x2dd8b7\x2d4249\x2da8af\x2d3467e9799fb1.mount has successfully entered the 'dead' state. Jan 23 17:49:59 hub-master-0.workload.bos2.lab systemd[1]: run-netns-357c8699\x2dd8b7\x2d4249\x2da8af\x2d3467e9799fb1.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-357c8699\x2dd8b7\x2d4249\x2da8af\x2d3467e9799fb1.mount has successfully entered the 'dead' state. Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.090309972Z" level=info msg="runSandbox: deleting pod ID bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249 from idIndex" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.090333463Z" level=info msg="runSandbox: removing pod sandbox bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.090347414Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.090362239Z" level=info msg="runSandbox: unmounting shmPath for sandbox bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249-userdata-shm.mount has successfully entered the 'dead' state. 
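The runSandbox lines above and below trace CRI-O's teardown of the failed etcd-guard sandbox in a fixed order: drop the pod ID from the idIndex, remove the sandbox and its container-ID entry, unmount the per-sandbox shm, remove the sandbox from storage, then release the reserved container and pod sandbox names so the kubelet can retry. A rough Go sketch of that ordering; the types and helpers here are illustrative stand-ins, not CRI-O's real internals:

    package main

    import "fmt"

    // Illustrative stand-ins for CRI-O's bookkeeping: the real server keeps an
    // idIndex, a name registry, and containers/storage state.
    type sandbox struct {
        id            string
        podName       string // k8s_<pod>_<ns>_<uid>_<attempt>
        containerName string // k8s_POD_<pod>_<ns>_<uid>_<attempt>
        shmPath       string
    }

    type server struct {
        idIndex map[string]*sandbox // "deleting pod ID ... from idIndex"
        names   map[string]bool     // reserved pod/container names
    }

    // cleanupFailedSandbox mirrors the "runSandbox: ..." order in the journal.
    func (s *server) cleanupFailedSandbox(sb *sandbox) {
        delete(s.idIndex, sb.id)           // deleting pod ID from idIndex
        fmt.Println("unmount", sb.shmPath) // unmounting shmPath (unix.Unmount in reality)
        // "removing pod sandbox from storage" would call the storage layer here.
        delete(s.names, sb.containerName) // releasing container name
        delete(s.names, sb.podName)       // releasing pod sandbox name
    }

    func main() {
        sb := &sandbox{
            id:            "bf66c89b37c0",
            podName:       "k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86_0",
            containerName: "k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86_0",
            shmPath:       "/var/run/containers/storage/overlay-containers/bf66c89b37c0/userdata/shm",
        }
        s := &server{
            idIndex: map[string]*sandbox{sb.id: sb},
            names:   map[string]bool{sb.podName: true, sb.containerName: true},
        }
        s.cleanupFailedSandbox(sb)
    }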
Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.106438952Z" level=info msg="runSandbox: removing pod sandbox from storage: bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.109283577Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:49:59.109302160Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=2ff67d84-5db4-4af3-9bda-5d9707d2f7e0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:49:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:59.109516 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:49:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:59.109565 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:59.109589 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:49:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:49:59.109638 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(bf66c89b37c003c23eccbe24f730ea5988012c1876ff9a41c82d6d511ce23249): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:50:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:03.995735 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:50:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:03.996034255Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:03.996073040Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.007511712Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/cdc1c143-3ca3-48b7-adba-e285df4603ef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.007533998Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.033399822Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.033427439Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4c7c3929\x2d0c60\x2d4fd2\x2da46f\x2dae156012b821.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4c7c3929\x2d0c60\x2d4fd2\x2da46f\x2dae156012b821.mount has successfully entered the 'dead' state. Jan 23 17:50:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4c7c3929\x2d0c60\x2d4fd2\x2da46f\x2dae156012b821.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4c7c3929\x2d0c60\x2d4fd2\x2da46f\x2dae156012b821.mount has successfully entered the 'dead' state. 
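Every "failed (add)" and "failed (delete)" in this window shares one root cause: multus is configured with readinessindicatorfile /var/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovnkube-node writes once the default OVN network is up. With ovnkube-node stuck in CrashLoopBackOff the file never appears, so multus's poll expires and surfaces the wait package's literal "timed out waiting for the condition". A minimal sketch of that gate, assuming the stock k8s.io/apimachinery poll helper (multus's actual implementation differs in its details):

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReadinessIndicator blocks until path exists, polling the way the
    // log's "PollImmediate error waiting for ReadinessIndicatorFile" implies.
    // On timeout, wait returns exactly "timed out waiting for the condition".
    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        return wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err == nil {
                return true, nil // indicator written; default network is ready
            }
            return false, nil // not there yet; keep polling until timeout
        })
    }

    func main() {
        err := waitForReadinessIndicator("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf", 5*time.Second)
        fmt.Println(err) // while OVN is down: timed out waiting for the condition
    }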
Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.070308925Z" level=info msg="runSandbox: deleting pod ID 863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc from idIndex" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.070332659Z" level=info msg="runSandbox: removing pod sandbox 863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.070345758Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.070358223Z" level=info msg="runSandbox: unmounting shmPath for sandbox 863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.086429340Z" level=info msg="runSandbox: removing pod sandbox from storage: 863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.089288123Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.089306966Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=d9491b09-4762-4503-9a6a-751499f287c4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:04.089508 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:50:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:04.089546 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:50:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:04.089570 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:50:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:04.089615 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:50:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4c7c3929\x2d0c60\x2d4fd2\x2da46f\x2dae156012b821.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4c7c3929\x2d0c60\x2d4fd2\x2da46f\x2dae156012b821.mount has successfully entered the 'dead' state. Jan 23 17:50:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-863437112b248c4d0c3faf121432dbabea1e126320662d3c8cdaa392bccfeafc-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:50:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:04.995586 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.995893058Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:04.995923402Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:50:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:05.007040455Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/411caa0c-56b1-499c-8e10-5a63ebe6173e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:05.007064296Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:06.995962 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:50:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:06.996295331Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:06.996333738Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:50:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:07.006963961Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/a5b329c2-0a7a-4d70-8f8c-bbae7dff3748 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:07.006983014Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:07.996272 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:50:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:07.996617442Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:07.996657598Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:50:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:08.007377161Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/3ca76a09-99b3-42a3-9a84-295fed57b63a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:08.007395132Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:09.996959 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:50:09 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:09.997487 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.835977528Z" 
level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.836023824Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9d450c73\x2d60ea\x2d4953\x2d9190\x2d27075fbd32d7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9d450c73\x2d60ea\x2d4953\x2d9190\x2d27075fbd32d7.mount has successfully entered the 'dead' state. Jan 23 17:50:10 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9d450c73\x2d60ea\x2d4953\x2d9190\x2d27075fbd32d7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9d450c73\x2d60ea\x2d4953\x2d9190\x2d27075fbd32d7.mount has successfully entered the 'dead' state. Jan 23 17:50:10 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9d450c73\x2d60ea\x2d4953\x2d9190\x2d27075fbd32d7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9d450c73\x2d60ea\x2d4953\x2d9190\x2d27075fbd32d7.mount has successfully entered the 'dead' state. 
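The recurring "back-off 5m0s restarting failed container=ovnkube-node" message is kubelet's container restart backoff at its ceiling: the delay starts at 10s and doubles on each failed restart until it saturates at 5 minutes, and only a stretch of stable running resets it. A sketch of that doubling-with-cap policy; the constants match kubelet's defaults but the helper name is ours, not kubelet's:

    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialBackoff = 10 * time.Second // kubelet's base container backoff
        maxBackoff     = 5 * time.Minute  // the "back-off 5m0s" ceiling in the log
    )

    // crashLoopDelay returns the wait before restart attempt n (0-based),
    // doubling from 10s and saturating at 5m, like CrashLoopBackOff.
    func crashLoopDelay(n int) time.Duration {
        d := initialBackoff
        for i := 0; i < n && d < maxBackoff; i++ {
            d *= 2
        }
        if d > maxBackoff {
            d = maxBackoff
        }
        return d
    }

    func main() {
        for n := 0; n < 7; n++ {
            fmt.Println(n, crashLoopDelay(n)) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
        }
    }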
Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.889388993Z" level=info msg="runSandbox: deleting pod ID 66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d from idIndex" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.889418161Z" level=info msg="runSandbox: removing pod sandbox 66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.889437376Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.889457497Z" level=info msg="runSandbox: unmounting shmPath for sandbox 66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.897430092Z" level=info msg="runSandbox: removing pod sandbox from storage: 66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.900292186Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.900311673Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=a1d1973c-36bf-4bf7-8a2a-233a42880b1e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:10.900557 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:50:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:10.900610 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:50:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:10.900637 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:50:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:10.900695 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(66a5d466abad1e3a319096d2883f8ae84824a48940f539f095a59c76cae3676d): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298 Jan 23 17:50:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:10.978927 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.979243456Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.979288218Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.990145062Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/a50b74e1-831c-4eba-bf0d-d802e3fda6c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:10.990167778Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:12.802594 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-hub-master-0.workload.bos2.lab_b8e918bfaafef0fc7d13026942c43171/kube-controller-manager/3.log" Jan 23 17:50:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:12.803386 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-hub-master-0.workload.bos2.lab_77321459d336b7d15305c9b9a83e4081/kube-scheduler/3.log" Jan 23 17:50:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:12.812377 8631 logs.go:405] "Finished parsing log file, hit bytes limit" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-hub-master-0.workload.bos2.lab_77321459d336b7d15305c9b9a83e4081/kube-scheduler-cert-syncer/3.log" limit=65536 Jan 23 17:50:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:12.815924 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-hub-master-0.workload.bos2.lab_b8e918bfaafef0fc7d13026942c43171/kube-controller-manager-cert-syncer/3.log" Jan 23 17:50:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:12.819158 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-hub-master-0.workload.bos2.lab_77321459d336b7d15305c9b9a83e4081/kube-scheduler-recovery-controller/3.log" Jan 23 17:50:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:12.824263 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-hub-master-0.workload.bos2.lab_77321459d336b7d15305c9b9a83e4081/wait-for-host-port/3.log" Jan 23 
17:50:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:12.838613 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-hub-master-0.workload.bos2.lab_b8e918bfaafef0fc7d13026942c43171/kube-controller-manager-recovery-controller/3.log" Jan 23 17:50:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:13.995585 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:50:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:13.995892827Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:13.995942000Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:50:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:14.006667867Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/7127cdf4-7637-4aac-81b1-996579788f15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:14.006688756Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:18.995473 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:50:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:18.995806950Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:18.995846689Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:50:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:19.006675789Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d340b6fd-e178-4a09-ae57-ae75ec7debd9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:19.006696313Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.857596390Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.857828037Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d7eb90bb\x2dcc2a\x2d415c\x2d9318\x2dd1c03e6f95f2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d7eb90bb\x2dcc2a\x2d415c\x2d9318\x2dd1c03e6f95f2.mount has successfully entered the 'dead' state. Jan 23 17:50:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d7eb90bb\x2dcc2a\x2d415c\x2d9318\x2dd1c03e6f95f2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d7eb90bb\x2dcc2a\x2d415c\x2d9318\x2dd1c03e6f95f2.mount has successfully entered the 'dead' state. Jan 23 17:50:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d7eb90bb\x2dcc2a\x2d415c\x2d9318\x2dd1c03e6f95f2.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d7eb90bb\x2dcc2a\x2d415c\x2d9318\x2dd1c03e6f95f2.mount has successfully entered the 'dead' state. Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.902280930Z" level=info msg="runSandbox: deleting pod ID 7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960 from idIndex" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.902307401Z" level=info msg="runSandbox: removing pod sandbox 7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.902322221Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.902335703Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960-userdata-shm.mount has successfully entered the 'dead' state. 
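The run-utsns/run-ipcns/run-netns unit names with embedded \x2d sequences are not corruption: systemd derives a transient mount unit's name from its mount point by dropping the leading slash, hex-escaping literal dashes as \x2d, and turning path separators into dashes, so /run/netns/d7eb90bb-cc2a-415c-9318-d1c03e6f95f2 becomes run-netns-d7eb90bb\x2dcc2a\x2d415c\x2d9318\x2dd1c03e6f95f2.mount. A simplified sketch of that encoding (real systemd-escape also escapes dots and non-ASCII bytes):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeMountUnit approximates `systemd-escape --path --suffix=mount`:
    // strip the leading "/", escape literal "-" as `\x2d` first, then map
    // "/" to "-". Simplified relative to systemd's full escaping rules.
    func escapeMountUnit(path string) string {
        p := strings.TrimPrefix(path, "/")
        p = strings.ReplaceAll(p, "-", `\x2d`)
        p = strings.ReplaceAll(p, "/", "-")
        return p + ".mount"
    }

    func main() {
        fmt.Println(escapeMountUnit("/run/netns/d7eb90bb-cc2a-415c-9318-d1c03e6f95f2"))
        // run-netns-d7eb90bb\x2dcc2a\x2d415c\x2d9318\x2dd1c03e6f95f2.mount
    }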
Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.914447005Z" level=info msg="runSandbox: removing pod sandbox from storage: 7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.917412918Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:21.917431494Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=01b1e727-1091-45ba-bb51-1ba9447af894 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:21.917637 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:50:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:21.917693 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:50:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:21.917719 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:50:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:21.917765 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(7f2c66b92d6b53c63839d2ae6288a1ff0573ba8b885f76121e0b14f7584d1960): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:50:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:22.000972 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:50:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:22.001152329Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:22.001183205Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:50:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:22.012266352Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/00770331-f484-4d50-8cf4-62dad4c0e6d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:22.012285269Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:22.822891 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-2j9w6_5ced4aec-1711-4abf-825a-c546047148b7/node-ca/3.log" Jan 23 17:50:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:23.996650 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:50:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:23.997198 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:50:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:24.222625 8631 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5040f1c4b8e68c6ec2cf8c9549e66445a07d2b333be76d95b65968f5df10372\": container with ID starting with e5040f1c4b8e68c6ec2cf8c9549e66445a07d2b333be76d95b65968f5df10372 not found: ID does not exist" containerID="e5040f1c4b8e68c6ec2cf8c9549e66445a07d2b333be76d95b65968f5df10372" Jan 23 17:50:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:24.222676 8631 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221) Jan 23 17:50:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:24.422584 8631 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f3215ee5eb0394947991d867c5d959bc72491150ad03ed175ef084d30b16c8\": container with ID starting with 38f3215ee5eb0394947991d867c5d959bc72491150ad03ed175ef084d30b16c8 not found: ID does not exist" containerID="38f3215ee5eb0394947991d867c5d959bc72491150ad03ed175ef084d30b16c8" Jan 23 17:50:24 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 17:50:24.422623 8631 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221) Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.222592 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-hub-master-0.workload.bos2.lab_9552ff413d8390655360ce968177c622/setup/3.log" Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.422983 8631 logs.go:405] "Finished parsing log file, hit bytes limit" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-hub-master-0.workload.bos2.lab_9552ff413d8390655360ce968177c622/kube-apiserver/3.log" limit=39945 Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.623010 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-hub-master-0.workload.bos2.lab_9552ff413d8390655360ce968177c622/kube-apiserver-cert-syncer/3.log" Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.822352 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-hub-master-0.workload.bos2.lab_9552ff413d8390655360ce968177c622/kube-apiserver-cert-regeneration-controller/3.log" Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.914524 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.914540 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.914550 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.914558 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.914564 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.914570 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:50:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:27.914576 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:50:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:27.915940467Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=9540c1eb-4b56-43e8-b313-4f07c8e2d6a5 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:50:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:27.916070480Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9540c1eb-4b56-43e8-b313-4f07c8e2d6a5 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:50:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:28.022359 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-hub-master-0.workload.bos2.lab_9552ff413d8390655360ce968177c622/kube-apiserver-insecure-readyz/3.log" Jan 23 17:50:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:28.147240717Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:50:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:28.223773 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-hub-master-0.workload.bos2.lab_9552ff413d8390655360ce968177c622/kube-apiserver-check-endpoints/3.log" Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.897697077Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.897752490Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.898161432Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.898213473Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 
17:50:31.900248257Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.900297265Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.901299437Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.901341069Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d11867a7\x2d2000\x2d40e5\x2da8e6\x2d7a2bf309df9a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d11867a7\x2d2000\x2d40e5\x2da8e6\x2d7a2bf309df9a.mount has successfully entered the 'dead' state. Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-441c9271\x2d45cf\x2d4be6\x2dabfd\x2d4efc5174910c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-441c9271\x2d45cf\x2d4be6\x2dabfd\x2d4efc5174910c.mount has successfully entered the 'dead' state. 
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.902518950Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.902546975Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-552e314e\x2d47f2\x2d4d5a\x2d921f\x2db6745a8d05a4.mount: Succeeded.
Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b304d734\x2d3cf8\x2d4439\x2d9a5c\x2dbdd46e5c3d8c.mount: Succeeded.
Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-338e7307\x2ddda2\x2d470f\x2d9468\x2d14d06d51de69.mount: Succeeded.
Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-441c9271\x2d45cf\x2d4be6\x2dabfd\x2d4efc5174910c.mount: Succeeded.
Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-338e7307\x2ddda2\x2d470f\x2d9468\x2d14d06d51de69.mount: Succeeded.
Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d11867a7\x2d2000\x2d40e5\x2da8e6\x2d7a2bf309df9a.mount: Succeeded.
Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-552e314e\x2d47f2\x2d4d5a\x2d921f\x2db6745a8d05a4.mount: Succeeded.
Jan 23 17:50:31 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b304d734\x2d3cf8\x2d4439\x2d9a5c\x2dbdd46e5c3d8c.mount: Succeeded.
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.942301680Z" level=info msg="runSandbox: deleting pod ID f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4 from idIndex" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.942346292Z" level=info msg="runSandbox: removing pod sandbox f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.942309974Z" level=info msg="runSandbox: deleting pod ID e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb from idIndex" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.942388402Z" level=info msg="runSandbox: removing pod sandbox e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.942407024Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.942425014Z" level=info msg="runSandbox: unmounting shmPath for sandbox f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.942492055Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.942516663Z" level=info msg="runSandbox: unmounting shmPath for sandbox e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.943393576Z" level=info msg="runSandbox: deleting pod ID c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9 from idIndex" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox
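
The odd mount unit names in the systemd entries above ("run-utsns-d11867a7\x2d2000\x2d...") are systemd-escaped paths such as /run/utsns/d11867a7-2000-...: "/" becomes "-" and a literal "-" becomes "\x2d". A rough Go rendering of that mapping, simplified from the full systemd rules (which also escape other non-alphanumeric characters as \xXX):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath mimics, approximately, `systemd-escape -p`:
    // escape literal dashes first, then turn path separators into dashes.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        out := strings.ReplaceAll(p, "-", `\x2d`)
        return strings.ReplaceAll(out, "/", "-")
    }

    func main() {
        fmt.Println(escapePath("/run/utsns/d11867a7-2000-40e5-a8e6-7a2bf309df9a") + ".mount")
        // Output: run-utsns-d11867a7\x2d2000\x2d40e5\x2da8e6\x2d7a2bf309df9a.mount
        // -- the unit name seen in the log.
    }
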
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.943420987Z" level=info msg="runSandbox: removing pod sandbox c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.943434501Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.943448189Z" level=info msg="runSandbox: unmounting shmPath for sandbox c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.946275880Z" level=info msg="runSandbox: deleting pod ID f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490 from idIndex" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.946300721Z" level=info msg="runSandbox: removing pod sandbox f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.946312988Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.946325931Z" level=info msg="runSandbox: unmounting shmPath for sandbox f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.947272665Z" level=info msg="runSandbox: deleting pod ID 8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c from idIndex" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.947298078Z" level=info msg="runSandbox: removing pod sandbox 8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.947312247Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.947327504Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.954450249Z" level=info msg="runSandbox: removing pod sandbox from storage: c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.957909188Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.957929268Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=50f65825-3e3b-430b-8b0d-c92e2dfe18b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.958143 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.958357 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.958379 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.958421 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.959487042Z" level=info msg="runSandbox: removing pod sandbox from storage: f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.959487803Z" level=info msg="runSandbox: removing pod sandbox from storage: e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.963553757Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.963576339Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=20cb356f-9ea5-49a0-b400-6f76d457ea31 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.963823 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.963862 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.963887 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.963934 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.966915953Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.966935429Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=a28ec4b3-495e-4235-b73d-33a5aabf58de name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.967149 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.967181 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.967203 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.967248 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.967567320Z" level=info msg="runSandbox: removing pod sandbox from storage: f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.967568958Z" level=info msg="runSandbox: removing pod sandbox from storage: 8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.971086228Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.971104908Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=09d85152-27b4-49a1-91f2-cb49fb112264 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.971336 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.971370 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.971394 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.971443 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.974800680Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:31.974825126Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=328249a1-c27d-4174-9748-b4ab2d10a425 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.975035 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.975071 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.975091 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:50:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:31.975130 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:32.019752 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:32.019927 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:32.019985 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:32.020011 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020164062Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020213816Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
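
When the kubelet logs "No sandbox for pod can be found. Need to start a new one", it issues a fresh RunPodSandbox call to CRI-O, which shows up as the "Running pod sandbox: .../POD" entries. A bare-bones sketch of that RPC against the CRI v1 API; the metadata values are copied from the log, while the socket path, timeout, and the mostly-empty config are simplifications (the kubelet fills in DNS, port mappings, security context, and more):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()

        resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "oauth-openshift-868d5f6bf8-svlxj",
                    Namespace: "openshift-authentication",
                    Uid:       "69794e08-d62b-401c-8dea-a730bf37256a",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            // On this node the call fails just like the entries above:
            // the CNI ADD times out waiting for the readiness indicator file.
            fmt.Println("RunPodSandbox:", err)
            return
        }
        fmt.Println("sandbox ID:", resp.GetPodSandboxId())
    }
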
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:32.020211 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020415224Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020449027Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020518334Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020554466Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020637190Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020660520Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020666477Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.020684640Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:32.022136 8631 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b209f037bd92b58aa083fe584cd23857e585c6f3a7391e12953adfd2cfc4c3\": container with ID starting with 37b209f037bd92b58aa083fe584cd23857e585c6f3a7391e12953adfd2cfc4c3 not found: ID does not exist" containerID="37b209f037bd92b58aa083fe584cd23857e585c6f3a7391e12953adfd2cfc4c3"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:32.022166 8631 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221)
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.045828242Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/88716d4a-3c14-46be-9b40-7252445b4d5c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.045851759Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.046158840Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9e7cd303-4751-436f-9771-0732d21ac956 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.046181852Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.055329614Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/b6e6bd90-1a5e-421b-beb4-1f308628f46c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.055357312Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.056003408Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/cabf909a-e3ae-4c39-b663-2b9ee1e48774 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.056027018Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.058875669Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/7fd3b44c-f105-4c7d-8f51-08f543d0b4bc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:32.058899001Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:32.622283 8631 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af308a866fe6e5386b2ddc085406e5c6d8997f19c3a8706b1d5d8bac1b46bc7\": container with ID starting with 3af308a866fe6e5386b2ddc085406e5c6d8997f19c3a8706b1d5d8bac1b46bc7 not found: ID does not exist" containerID="3af308a866fe6e5386b2ddc085406e5c6d8997f19c3a8706b1d5d8bac1b46bc7"
Jan 23 17:50:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:32.622330 8631 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221)
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-552e314e\x2d47f2\x2d4d5a\x2d921f\x2db6745a8d05a4.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b304d734\x2d3cf8\x2d4439\x2d9a5c\x2dbdd46e5c3d8c.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-338e7307\x2ddda2\x2d470f\x2d9468\x2d14d06d51de69.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d11867a7\x2d2000\x2d40e5\x2da8e6\x2d7a2bf309df9a.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-441c9271\x2d45cf\x2d4be6\x2dabfd\x2d4efc5174910c.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c54c6e9be74297b20e7993302213e0cfd4c5c08cdca5d8be2e58ffd093ea0cb9-userdata-shm.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8291f1a3dd3b215b5fd4c2d6eb6099d059f24e7482f5e6c9db98ff4cd0f74d3c-userdata-shm.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f246b79da53a66e85b87660debe68d4177906c995aeef2fc56fbffb7563e7490-userdata-shm.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f2662ccba514dbade23b4ba057deff7b99b66b0b99914072c3d0dc7e9a2b65f4-userdata-shm.mount: Succeeded.
Jan 23 17:50:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-e16c93f3f71fb4ab34d5a6a276600e8f98902ba3b917b1cb220bf06095173bcb-userdata-shm.mount: Succeeded.
Jan 23 17:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:35.022335067Z" level=info msg="NetworkStart: stopping network for sandbox 04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:35.022508812Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/410406f7-4804-4606-97ec-1ea0d079a498 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:35.022536813Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:35.022544320Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:50:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:35.022551376Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:35.996218 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:50:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:35.996778 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:36.021703442Z" level=info msg="NetworkStart: stopping network for sandbox 95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox
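
The recurring "back-off 5m0s restarting failed container=ovnkube-node" entries are the kubelet's CrashLoopBackOff at its cap: the restart delay doubles from an initial value up to a maximum, and this container has crashed enough times to sit at the ceiling. A small sketch of that arithmetic; the 10s initial delay and 5m cap match the upstream kubelet defaults to the best of my knowledge, but treat them as assumptions here:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial = 10 * time.Second // assumed kubelet initial backoff
            max     = 5 * time.Minute  // assumed kubelet MaxContainerBackOff
        )
        delay := initial
        for i := 1; i <= 8; i++ {
            fmt.Printf("restart %d: back-off %v\n", i, delay)
            delay *= 2
            if delay > max {
                delay = max // after a few crashes every retry waits 5m0s
            }
        }
        // Output: 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s from there on --
        // which is why the log shows "back-off 5m0s" every ~15s scan.
    }
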
Jan 23 17:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:36.021842907Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/fae5b7ba-330c-418d-9fb9-c5ed55e7afdc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:36.021866614Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:36.021873651Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:50:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:36.021880166Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:38.021097902Z" level=info msg="NetworkStart: stopping network for sandbox 6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:38.021283987Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/09ab84a4-7278-47e8-b99b-dfd9a68d480f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:38.021315291Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:38.021322962Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:50:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:38.021329946Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:40.823140 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-pbh26_ff6a907c-8dc5-4524-b928-d97ba7b430c3/init-textfile/3.log"
Jan 23 17:50:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:41.021783899Z" level=info msg="NetworkStart: stopping network for sandbox 6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:41.021925580Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/bd9be786-5bd3-44bd-af21-66af292dcd27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:41.021948235Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:50:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:41.021955211Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:50:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:41.021963442Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:41.024405 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-pbh26_ff6a907c-8dc5-4524-b928-d97ba7b430c3/node-exporter/3.log"
Jan 23 17:50:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:41.222929 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-pbh26_ff6a907c-8dc5-4524-b928-d97ba7b430c3/kube-rbac-proxy/3.log"
Jan 23 17:50:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:42.020370283Z" level=info msg="NetworkStart: stopping network for sandbox c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:42.020523445Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/a13e0d48-67c8-4ad2-958f-7f2bab2c3288 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:42.020546442Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:50:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:42.020552896Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:50:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:42.020559045Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:49.021244016Z" level=info msg="NetworkStart: stopping network for sandbox 9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:49.021614282Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/cdc1c143-3ca3-48b7-adba-e285df4603ef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:49.021637543Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:50:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:49.021645455Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:50:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:49.021652021Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:50.020807115Z" level=info msg="NetworkStart: stopping network for sandbox 6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:50.020938046Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/411caa0c-56b1-499c-8e10-5a63ebe6173e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:50.020961608Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:50:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:50.020967996Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:50:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:50.020973667Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:50:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:50:50.996438 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:50:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:50:50.996954 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:50:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:52.019674529Z" level=info msg="NetworkStart: stopping network for sandbox 72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:50:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:52.019814096Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/a5b329c2-0a7a-4d70-8f8c-bbae7dff3748 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:50:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:52.019838461Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:50:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:52.019844880Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:50:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:52.019851109Z" level=info msg="Deleting 
pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:53.020696943Z" level=info msg="NetworkStart: stopping network for sandbox 61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:53.020834088Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/3ca76a09-99b3-42a3-9a84-295fed57b63a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:53.020855028Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:50:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:53.020864720Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:50:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:53.020871200Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:56.004093571Z" level=info msg="NetworkStart: stopping network for sandbox fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:56.004324754Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/a50b74e1-831c-4eba-bf0d-d802e3fda6c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:56.004353346Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:50:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:56.004360966Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:50:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:56.004368681Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:50:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:58.142622712Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:50:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:59.021405751Z" level=info msg="NetworkStart: stopping network for sandbox f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:50:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:59.021564760Z" 
level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/7127cdf4-7637-4aac-81b1-996579788f15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:50:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:59.021592438Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:50:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:59.021599712Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:50:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:50:59.021606630Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:51:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:02.483047 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7ks6h_94cb9be9-32f4-413c-9fdf-a6e9307ff410/egress-router-binary-copy/3.log" Jan 23 17:51:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:02.488096 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7ks6h_94cb9be9-32f4-413c-9fdf-a6e9307ff410/cni-plugins/3.log" Jan 23 17:51:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:02.493002 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7ks6h_94cb9be9-32f4-413c-9fdf-a6e9307ff410/bond-cni-plugin/3.log" Jan 23 17:51:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:02.652370 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7ks6h_94cb9be9-32f4-413c-9fdf-a6e9307ff410/routeoverride-cni/3.log" Jan 23 17:51:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:02.853456 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7ks6h_94cb9be9-32f4-413c-9fdf-a6e9307ff410/whereabouts-cni-bincopy/3.log" Jan 23 17:51:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:03.053606 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7ks6h_94cb9be9-32f4-413c-9fdf-a6e9307ff410/whereabouts-cni/3.log" Jan 23 17:51:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:03.252553 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7ks6h_94cb9be9-32f4-413c-9fdf-a6e9307ff410/kube-multus-additional-cni-plugins/3.log" Jan 23 17:51:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:03.996369 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:51:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:03.996905 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" 
podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:51:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:04.019754287Z" level=info msg="NetworkStart: stopping network for sandbox 9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:04.019890633Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d340b6fd-e178-4a09-ae57-ae75ec7debd9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:51:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:04.019915776Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:51:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:04.019922080Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:51:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:04.019927968Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:51:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:07.025589829Z" level=info msg="NetworkStart: stopping network for sandbox 75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:07.025733128Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/00770331-f484-4d50-8cf4-62dad4c0e6d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:51:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:07.025757451Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:51:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:07.025764754Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:51:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:07.025773330Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:51:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:07.053649 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cdt6c_b6c2cdc5-967e-4062-b6e6-f6cf372cc21c/kube-multus/128.log" Jan 23 17:51:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:07.251531 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cdt6c_b6c2cdc5-967e-4062-b6e6-f6cf372cc21c/kube-multus/129.log" Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1190] device (eno12409): state change: ip-config -> failed 
(reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1195] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1196] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1212] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1213] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1226] policy: auto-activating connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1230] device (eno12409): Activation: starting connection 'Wired Connection' (8105e4a7-d75c-4c11-b250-7d472ed203fe) Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1230] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1232] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1235] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jan 23 17:51:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496268.1240] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:51:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496270.0344] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds) Jan 23 17:51:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:10.652237 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-fld8m_a88a1018-cc7c-4bd1-b3d2-0d960b53459c/northd/3.log" Jan 23 17:51:10 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:10.853632 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-fld8m_a88a1018-cc7c-4bd1-b3d2-0d960b53459c/nbdb/3.log" Jan 23 17:51:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:11.054520 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-fld8m_a88a1018-cc7c-4bd1-b3d2-0d960b53459c/kube-rbac-proxy/3.log" Jan 23 17:51:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:11.252677 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-fld8m_a88a1018-cc7c-4bd1-b3d2-0d960b53459c/sbdb/3.log" Jan 23 17:51:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:11.452784 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-fld8m_a88a1018-cc7c-4bd1-b3d2-0d960b53459c/ovnkube-master/3.log" Jan 23 17:51:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:11.652635 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-master-fld8m_a88a1018-cc7c-4bd1-b3d2-0d960b53459c/ovn-dbchecker/3.log" Jan 23 17:51:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:14.996979 
8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:51:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:14.997520 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:15.054258 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/197.log" Jan 23 17:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:15.252670 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovn-controller/1.log" Jan 23 17:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:15.452895 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovn-acl-logging/1.log" Jan 23 17:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:15.652794 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/kube-rbac-proxy/1.log" Jan 23 17:51:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:15.853523 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/kube-rbac-proxy-ovn-metrics/1.log" Jan 23 17:51:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:16.053374 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/197.log" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.059266316Z" level=info msg="NetworkStart: stopping network for sandbox 2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.059432995Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/88716d4a-3c14-46be-9b40-7252445b4d5c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.059458272Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.059466884Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.059474104Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 
23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.064728851Z" level=info msg="NetworkStart: stopping network for sandbox dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.064871715Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09 UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/9e7cd303-4751-436f-9771-0732d21ac956 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.064895180Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.064901942Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.064908475Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.069551770Z" level=info msg="NetworkStart: stopping network for sandbox 84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.069664015Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/b6e6bd90-1a5e-421b-beb4-1f308628f46c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.069686621Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.069694356Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.069701348Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.069960098Z" level=info msg="NetworkStart: stopping network for sandbox 149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.070072587Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/cabf909a-e3ae-4c39-b663-2b9ee1e48774 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.070093934Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.070105205Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.070111375Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.071130084Z" level=info msg="NetworkStart: stopping network for sandbox f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.071249934Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/7fd3b44c-f105-4c7d-8f51-08f543d0b4bc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.071273258Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.071280934Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:51:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:17.071287873Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.034335444Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.034383200Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-410406f7\x2d4804\x2d4606\x2d97ec\x2d1ea0d079a498.mount: Succeeded. 
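The repeated "PollImmediate error waiting for ReadinessIndicatorFile" failures above share one cause: Multus is configured with a readinessindicatorfile (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf, written by ovnkube-node once the default OVN network is up) and polls for that file before serving any CNI ADD or DEL. With ovnkube-node stuck in CrashLoopBackOff, the file never appears, so every sandbox create and teardown times out. A minimal sketch of that gate; the interval and timeout are placeholder assumptions, and the real implementation is Multus's own, built on wait.PollImmediate:

    import os, time

    def wait_for_readiness_file(path, interval=1.0, timeout=600.0):
        """Illustrative stand-in for Multus's ReadinessIndicatorFile gate:
        return once `path` exists, else raise after `timeout` seconds."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:  # check immediately, then every `interval`
            if os.path.isfile(path):
                return
            time.sleep(interval)
        raise TimeoutError("timed out waiting for the condition")

    # The file every ADD/DEL in this log is blocked on:
    # wait_for_readiness_file("/var/run/multus/cni/net.d/10-ovn-kubernetes.conf")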
Jan 23 17:51:20 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-410406f7\x2d4804\x2d4606\x2d97ec\x2d1ea0d079a498.mount: Succeeded. Jan 23 17:51:20 hub-master-0.workload.bos2.lab systemd[1]: run-netns-410406f7\x2d4804\x2d4606\x2d97ec\x2d1ea0d079a498.mount: Succeeded. Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.071340113Z" level=info msg="runSandbox: deleting pod ID 04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d from idIndex" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.071472850Z" level=info msg="runSandbox: removing pod sandbox 04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.071491071Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.071505402Z" level=info msg="runSandbox: unmounting shmPath for sandbox 04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d-userdata-shm.mount: Succeeded.
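The \x2d runs in the mount-unit names above are systemd escaping, not corruption: a unit named from a path encodes "/" as "-", so any literal hyphen in the path (here, in the netns UUID) must be written as \x2d. A small decoder in the spirit of systemd-escape --unescape --path, assuming the names contain only \xNN escapes besides the "-" separators, which holds for these units:

    import re

    def unit_to_path(unit: str) -> str:
        """Map e.g. 'run-netns-410406f7\\x2d4804...' back to
        '/run/netns/410406f7-4804-...'."""
        name = unit[:-6] if unit.endswith(".mount") else unit
        decode = lambda s: re.sub(r"\\x([0-9a-fA-F]{2})",
                                  lambda m: chr(int(m.group(1), 16)), s)
        # split on '-' (encoded '/') first, then decode \xNN inside each segment
        return "/" + "/".join(decode(seg) for seg in name.split("-"))

    print(unit_to_path(r"run-netns-410406f7\x2d4804\x2d4606\x2d97ec\x2d1ea0d079a498.mount"))
    # /run/netns/410406f7-4804-4606-97ec-1ea0d079a498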
Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.087485559Z" level=info msg="runSandbox: removing pod sandbox from storage: 04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.090498013Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:20.090516871Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=a422fb6b-0801-4d0e-8ae4-4cb54154185c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:20.090775 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:20.090818 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:20.090840 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:51:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:20.090880 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(04b9ecf24fca2ee221cb13ae8dc3435c435ed63579292f75a14fe4798d470d1d): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.032228041Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.032275741Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fae5b7ba\x2d330c\x2d418d\x2d9fb9\x2dc5ed55e7afdc.mount: Succeeded. Jan 23 17:51:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fae5b7ba\x2d330c\x2d418d\x2d9fb9\x2dc5ed55e7afdc.mount: Succeeded. Jan 23 17:51:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fae5b7ba\x2d330c\x2d418d\x2d9fb9\x2dc5ed55e7afdc.mount: Succeeded.
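The "back-off 5m0s restarting failed container=ovnkube-node" entries recurring through this window are the kubelet's crash-loop back-off at its ceiling: the restart delay starts small and doubles per failure until it saturates, after which the same 5m0s message repeats on every sync. A sketch of that progression; base 10s and cap 300s are the commonly cited kubelet defaults, assumed here rather than read from this node's configuration:

    # Crash-loop restart delays: exponential, capped.
    base_s, cap_s = 10, 300            # assumed kubelet defaults
    delay, schedule = base_s, []
    for _ in range(8):                 # first eight restart attempts
        schedule.append(min(delay, cap_s))
        delay *= 2
    print(schedule)                    # [10, 20, 40, 80, 160, 300, 300, 300]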
Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.085415894Z" level=info msg="runSandbox: deleting pod ID 95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719 from idIndex" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.085442400Z" level=info msg="runSandbox: removing pod sandbox 95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.085458508Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.085478147Z" level=info msg="runSandbox: unmounting shmPath for sandbox 95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719-userdata-shm.mount: Succeeded.
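Each failed teardown above first logs "not found in CNI cache", then "falling back to loading from existing plugins on disk". libcni caches the network configuration and result of a successful ADD per attachment so that a later DEL can replay exactly what was added; these sandboxes never completed their ADD, so no cache entry exists and CRI-O re-reads the on-disk network definitions instead. Schematically (the cache layout and the CACHE_DIR path below are assumptions for illustration, not CRI-O's exact layout):

    import json, os

    CACHE_DIR = "/var/lib/cni/results"   # assumed per-attachment cache location
    CONF_DIR = "/etc/cni/net.d"          # network definitions on disk

    def load_network_config(network, container_id, ifname="eth0"):
        """Cache first, disk second -- the lookup order visible in the log."""
        cached = os.path.join(CACHE_DIR, f"{network}-{container_id}-{ifname}")
        if os.path.exists(cached):        # hit: replay the exact ADD-time config
            with open(cached) as f:
                return json.load(f)
        # miss: "falling back to loading from existing plugins on disk"
        for fn in sorted(os.listdir(CONF_DIR)):
            if not fn.endswith((".conf", ".conflist")):
                continue
            with open(os.path.join(CONF_DIR, fn)) as f:
                cfg = json.load(f)
            if cfg.get("name") == network:
                return cfg
        raise FileNotFoundError(f'network "{network}" not found in CNI cache or {CONF_DIR}')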
Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.098486708Z" level=info msg="runSandbox: removing pod sandbox from storage: 95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.102334659Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:21.102353764Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=18dd3432-d4dc-447e-8282-35cf397c001d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:21.102517 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:51:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:21.102566 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:51:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:21.102591 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:51:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:21.102642 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(95f9d1147a846e583cc94ca616ae0a7bcd49451ceab6fb52243c7609c28a2719): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.032037601Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.032076151Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-09ab84a4\x2d7278\x2d47e8\x2db99b\x2ddfd9a68d480f.mount: Succeeded. Jan 23 17:51:23 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-09ab84a4\x2d7278\x2d47e8\x2db99b\x2ddfd9a68d480f.mount: Succeeded. Jan 23 17:51:23 hub-master-0.workload.bos2.lab systemd[1]: run-netns-09ab84a4\x2d7278\x2d47e8\x2db99b\x2ddfd9a68d480f.mount: Succeeded.
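The same two-phase failure (the DEL times out on the readiness file during cleanup, then the retried ADD times out again) has now hit DNS, the scheduler, controller-manager, apiserver and etcd guard pods, revision pruners, an installer, oauth, the ingress canary, and network diagnostics: exactly the pods that need a CNI attachment, while host-network pods keep logging normally. To enumerate the blocked pods from a saved copy of this journal (the input filename is a placeholder):

    import re
    from collections import Counter

    pat = re.compile(r'error (?:adding|removing) pod (\S+?) (?:to|from) CNI network')
    hits = Counter()
    with open("journal.txt") as f:       # placeholder: this log excerpt, saved to disk
        for line in f:
            hits.update(pat.findall(line))

    for pod, n in hits.most_common():    # namespace_podname, failure count
        print(f"{n:4d}  {pod}")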
Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.076307935Z" level=info msg="runSandbox: deleting pod ID 6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb from idIndex" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.076336129Z" level=info msg="runSandbox: removing pod sandbox 6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.076350528Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.076365570Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb-userdata-shm.mount: Succeeded. Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.084530218Z" level=info msg="runSandbox: removing pod sandbox from storage: 6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.087929448Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:23.087948962Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=e9a49420-03a0-4f1e-afca-0271074d114e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:23.088201 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Jan 23 17:51:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:23.088259 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:51:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:23.088280 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:51:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:23.088329 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(6b5e565bea22992b9c07851ef556a768630e28ce704c6e79fa81f5e18a189fdb): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:51:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:25.283968 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/197.log" Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.033913166Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.033954646Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:51:26 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bd9be786\x2d5bd3\x2d44bd\x2daf21\x2d66af292dcd27.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-bd9be786\x2d5bd3\x2d44bd\x2daf21\x2d66af292dcd27.mount has successfully entered the 'dead' state. Jan 23 17:51:26 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bd9be786\x2d5bd3\x2d44bd\x2daf21\x2d66af292dcd27.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-bd9be786\x2d5bd3\x2d44bd\x2daf21\x2d66af292dcd27.mount has successfully entered the 'dead' state. Jan 23 17:51:26 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bd9be786\x2d5bd3\x2d44bd\x2daf21\x2d66af292dcd27.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-bd9be786\x2d5bd3\x2d44bd\x2daf21\x2d66af292dcd27.mount has successfully entered the 'dead' state. 
Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.083313383Z" level=info msg="runSandbox: deleting pod ID 6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159 from idIndex" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.083338183Z" level=info msg="runSandbox: removing pod sandbox 6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.083352239Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.083364884Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:26 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.103414244Z" level=info msg="runSandbox: removing pod sandbox from storage: 6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.107033013Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:26.107051977Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=9b47b6bd-b1f2-4d15-976f-5ea9c63dad87 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:26.107267 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:26.107328 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:26.107351 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:26.107402 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(6d439820779eef5760743b7176e476160877e9d427dda52e29a78cd9b0c14159): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6
Jan 23 17:51:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:26.996076 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:51:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:26.996590 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.031801663Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.031835868Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a13e0d48\x2d67c8\x2d4ad2\x2d958f\x2d7f2bab2c3288.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a13e0d48\x2d67c8\x2d4ad2\x2d958f\x2d7f2bab2c3288.mount has successfully entered the 'dead' state.
Jan 23 17:51:27 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a13e0d48\x2d67c8\x2d4ad2\x2d958f\x2d7f2bab2c3288.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a13e0d48\x2d67c8\x2d4ad2\x2d958f\x2d7f2bab2c3288.mount has successfully entered the 'dead' state.
Jan 23 17:51:27 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a13e0d48\x2d67c8\x2d4ad2\x2d958f\x2d7f2bab2c3288.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a13e0d48\x2d67c8\x2d4ad2\x2d958f\x2d7f2bab2c3288.mount has successfully entered the 'dead' state.
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.080288776Z" level=info msg="runSandbox: deleting pod ID c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa from idIndex" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.080314051Z" level=info msg="runSandbox: removing pod sandbox c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.080327023Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.080338064Z" level=info msg="runSandbox: unmounting shmPath for sandbox c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.101425963Z" level=info msg="runSandbox: removing pod sandbox from storage: c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.105136805Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:27.105163752Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=ba6fa717-0ba2-4267-8641-801d0bed1364 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:27.105482 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:27.105524 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:27.105548 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:27.105593 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(c44e22d985fb46cc082f8a65ac7a409dbece20f13310c0dff0b59c55ec4185fa): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:27.915313 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:27.915333 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:27.915341 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:27.915347 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:27.915352 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:27.915359 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:51:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:27.915365 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:51:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:28.360351689Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:51:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:31.995998 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:51:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:31.996623189Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:31.996669763Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:32.008805753Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/3d3dae5e-4f06-4228-958b-3d8bced8ed75 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:32.008826410Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:33.996201 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:33.996602906Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:33.996646706Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.010944732Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/6ada08f1-ea24-4c3e-9eff-1da78467a0d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.010979953Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.032930921Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.032961773Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cdc1c143\x2d3ca3\x2d48b7\x2dadba\x2de285df4603ef.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-cdc1c143\x2d3ca3\x2d48b7\x2dadba\x2de285df4603ef.mount has successfully entered the 'dead' state.
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.078278637Z" level=info msg="runSandbox: deleting pod ID 9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612 from idIndex" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.078301589Z" level=info msg="runSandbox: removing pod sandbox 9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.078314030Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.078324385Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.095440772Z" level=info msg="runSandbox: removing pod sandbox from storage: 9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.098318655Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:34.098336221Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=66122a21-9bad-4470-b692-860d72bd6c18 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:34.098511 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:34.098553 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:51:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:34.098577 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:51:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:34.098624 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:51:34 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cdc1c143\x2d3ca3\x2d48b7\x2dadba\x2de285df4603ef.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-cdc1c143\x2d3ca3\x2d48b7\x2dadba\x2de285df4603ef.mount has successfully entered the 'dead' state.
Jan 23 17:51:34 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cdc1c143\x2d3ca3\x2d48b7\x2dadba\x2de285df4603ef.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-cdc1c143\x2d3ca3\x2d48b7\x2dadba\x2de285df4603ef.mount has successfully entered the 'dead' state.
Jan 23 17:51:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9d223cb59b2364263c5a15246db042cd6a106b015272b1a4a4d54c773f3c7612-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.033299327Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.033339841Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-411caa0c\x2d56b1\x2d499c\x2d8e10\x2d5a63ebe6173e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-411caa0c\x2d56b1\x2d499c\x2d8e10\x2d5a63ebe6173e.mount has successfully entered the 'dead' state.
Jan 23 17:51:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-411caa0c\x2d56b1\x2d499c\x2d8e10\x2d5a63ebe6173e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-411caa0c\x2d56b1\x2d499c\x2d8e10\x2d5a63ebe6173e.mount has successfully entered the 'dead' state.
Jan 23 17:51:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-411caa0c\x2d56b1\x2d499c\x2d8e10\x2d5a63ebe6173e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-411caa0c\x2d56b1\x2d499c\x2d8e10\x2d5a63ebe6173e.mount has successfully entered the 'dead' state.
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.082309552Z" level=info msg="runSandbox: deleting pod ID 6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b from idIndex" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.082333133Z" level=info msg="runSandbox: removing pod sandbox 6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.082346621Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.082358082Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.103429697Z" level=info msg="runSandbox: removing pod sandbox from storage: 6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.106945843Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.106963453Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=e77e7efa-107d-4109-b007-0ce64d3883a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:35.107227 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:35.107278 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:51:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:35.107303 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:51:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:35.107355 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(6d7e5d9dc855310cac569c3992f3fcb2527f63e9c0c3f909406b144a078d7f6b): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:51:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:35.995814 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.996274885Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:35.996318889Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:36.008071339Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/db2cea1e-72c5-439e-a40f-82ee50a2206d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:36.008093546Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:36 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:36.996064 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:36.996453734Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:36.996499439Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.007500132Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/474ad0f7-84b2-4591-8144-bc40916e71ec Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.007728776Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.030550000Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.030579353Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a5b329c2\x2d0a7a\x2d4d70\x2d8f8c\x2dbbae7dff3748.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a5b329c2\x2d0a7a\x2d4d70\x2d8f8c\x2dbbae7dff3748.mount has successfully entered the 'dead' state.
Jan 23 17:51:37 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a5b329c2\x2d0a7a\x2d4d70\x2d8f8c\x2dbbae7dff3748.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a5b329c2\x2d0a7a\x2d4d70\x2d8f8c\x2dbbae7dff3748.mount has successfully entered the 'dead' state.
Jan 23 17:51:37 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a5b329c2\x2d0a7a\x2d4d70\x2d8f8c\x2dbbae7dff3748.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a5b329c2\x2d0a7a\x2d4d70\x2d8f8c\x2dbbae7dff3748.mount has successfully entered the 'dead' state.
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.080366059Z" level=info msg="runSandbox: deleting pod ID 72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd from idIndex" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.080391983Z" level=info msg="runSandbox: removing pod sandbox 72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.080405520Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.080418328Z" level=info msg="runSandbox: unmounting shmPath for sandbox 72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.101414609Z" level=info msg="runSandbox: removing pod sandbox from storage: 72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.104277523Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:37.104295548Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=fe226164-adec-426d-8a8a-4ad942398bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:37.104497 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:37.104541 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:37.104566 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:37.104613 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(72fa7ff02f3733eaf0fc5f8df64f8eee158f9c91f009f948605755a31e4feccd): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.032347389Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.032386248Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3ca76a09\x2d99b3\x2d42a3\x2d9a84\x2d295fed57b63a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3ca76a09\x2d99b3\x2d42a3\x2d9a84\x2d295fed57b63a.mount has successfully entered the 'dead' state.
Jan 23 17:51:38 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3ca76a09\x2d99b3\x2d42a3\x2d9a84\x2d295fed57b63a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3ca76a09\x2d99b3\x2d42a3\x2d9a84\x2d295fed57b63a.mount has successfully entered the 'dead' state.
Jan 23 17:51:38 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3ca76a09\x2d99b3\x2d42a3\x2d9a84\x2d295fed57b63a.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-3ca76a09\x2d99b3\x2d42a3\x2d9a84\x2d295fed57b63a.mount has successfully entered the 'dead' state.
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.083305141Z" level=info msg="runSandbox: deleting pod ID 61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2 from idIndex" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.083329178Z" level=info msg="runSandbox: removing pod sandbox 61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.083345207Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.083356532Z" level=info msg="runSandbox: unmounting shmPath for sandbox 61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.099460176Z" level=info msg="runSandbox: removing pod sandbox from storage: 61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.102965257Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:38.102982488Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=4e91e664-a74f-4f68-9415-bb637b44572b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:38.103195 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:38.103284 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:51:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:38.103307 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:51:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:38.103351 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(61eeb504854f32376016d26e893d154164f6a3267cf528c4f79961157b7daff2): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:51:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:40.996339 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:51:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:40.996658041Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:40 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:40.996697503Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:40.997143 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:51:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:40.997665 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.007744131Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2f45196b-d781-4038-bf75-6d5cfab7ddbd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.007763150Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.017219423Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.017261500Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a50b74e1\x2d831c\x2d4eba\x2dbf0d\x2dd802e3fda6c9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-a50b74e1\x2d831c\x2d4eba\x2dbf0d\x2dd802e3fda6c9.mount has successfully entered the 'dead' state.
Jan 23 17:51:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a50b74e1\x2d831c\x2d4eba\x2dbf0d\x2dd802e3fda6c9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-a50b74e1\x2d831c\x2d4eba\x2dbf0d\x2dd802e3fda6c9.mount has successfully entered the 'dead' state.
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.070385761Z" level=info msg="runSandbox: deleting pod ID fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32 from idIndex" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.070418926Z" level=info msg="runSandbox: removing pod sandbox fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.070434679Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.070451855Z" level=info msg="runSandbox: unmounting shmPath for sandbox fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.080465585Z" level=info msg="runSandbox: removing pod sandbox from storage: fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.083268268Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.083290053Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=72a6c65d-224a-46c4-8ca6-c3a8d1084a1c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:41.083582 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:41.083635 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:41.083661 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:41.083714 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:51:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:41.141679 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.142028378Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.142068405Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.152549174Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/5b9b4e45-db08-48df-9dbf-8350f4dfb790 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:41.152569874Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a50b74e1\x2d831c\x2d4eba\x2dbf0d\x2dd802e3fda6c9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-a50b74e1\x2d831c\x2d4eba\x2dbf0d\x2dd802e3fda6c9.mount has successfully entered the 'dead' state.
Jan 23 17:51:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-fcb6a4b610a67a05efaee004764d1c2b0b765f63cb4906811dcce5ed2ec23a32-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:41.484098 8631 cert_rotation.go:88] certificate rotation detected, shutting down client connections to start using new credentials
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.031465272Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.031525876Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7127cdf4\x2d7637\x2d4aac\x2d81b1\x2d996579788f15.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-7127cdf4\x2d7637\x2d4aac\x2d81b1\x2d996579788f15.mount has successfully entered the 'dead' state.
Jan 23 17:51:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7127cdf4\x2d7637\x2d4aac\x2d81b1\x2d996579788f15.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-7127cdf4\x2d7637\x2d4aac\x2d81b1\x2d996579788f15.mount has successfully entered the 'dead' state.
Jan 23 17:51:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7127cdf4\x2d7637\x2d4aac\x2d81b1\x2d996579788f15.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-7127cdf4\x2d7637\x2d4aac\x2d81b1\x2d996579788f15.mount has successfully entered the 'dead' state.
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.071273796Z" level=info msg="runSandbox: deleting pod ID f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962 from idIndex" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.071302658Z" level=info msg="runSandbox: removing pod sandbox f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.071315801Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.071327147Z" level=info msg="runSandbox: unmounting shmPath for sandbox f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.088472524Z" level=info msg="runSandbox: removing pod sandbox from storage: f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.091613548Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:44.091633075Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=ee3fa0e8-2610-48bc-940c-8eccd89cd703 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:44.091849 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:44.091892 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:44.091915 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:44.091961 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(f28ba34b1a0b32022fcbbe76efc61f4ffefd14cbb6a8950afb8fc390f8ecb962): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:51:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:47.997037 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:51:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:47.997210 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:51:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:47.997380592Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:47.997430266Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:47.997490811Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:47.997524794Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:48.012015676Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/6e3fb611-44a1-44d4-b69f-071b8eaf1fef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:48.012035026Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:48.012696205Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/6d1287ea-70d5-4368-b5ea-071085c7745f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:48.012716172Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.031404238Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.031439839Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d340b6fd\x2de178\x2d4a09\x2dae57\x2dae75ec7debd9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d340b6fd\x2de178\x2d4a09\x2dae57\x2dae75ec7debd9.mount has successfully entered the 'dead' state.
Jan 23 17:51:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d340b6fd\x2de178\x2d4a09\x2dae57\x2dae75ec7debd9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d340b6fd\x2de178\x2d4a09\x2dae57\x2dae75ec7debd9.mount has successfully entered the 'dead' state.
Jan 23 17:51:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d340b6fd\x2de178\x2d4a09\x2dae57\x2dae75ec7debd9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d340b6fd\x2de178\x2d4a09\x2dae57\x2dae75ec7debd9.mount has successfully entered the 'dead' state.
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.068307482Z" level=info msg="runSandbox: deleting pod ID 9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99 from idIndex" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.068332946Z" level=info msg="runSandbox: removing pod sandbox 9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.068347001Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.068358586Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.084400705Z" level=info msg="runSandbox: removing pod sandbox from storage: 9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.087729474Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:49.087748529Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a8a64d29-b81e-46c7-b588-8a8ed3c1d9b7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:49.087872 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:49.087913 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:51:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:49.087936 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:51:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:49.087982 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(9736a9a47b837d15532eeb3295782a3c26366d3617c5cd0b4c9a068e313a4c99): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:51:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:50.996178 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:50.996333 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:51:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:50.996546394Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:50.996583415Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:50.996664754Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:50.996698008Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:51.010564580Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/9d38073b-158c-4350-86d1-eb56baf06608 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:51.010583812Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:51.012294864Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/dc993b49-6c98-4693-b77d-dcb63c6d97b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:51.012313923Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.036365218Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.036401343Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-00770331\x2df484\x2d4d50\x2d8cf4\x2d62dad4c0e6d7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-00770331\x2df484\x2d4d50\x2d8cf4\x2d62dad4c0e6d7.mount has successfully entered the 'dead' state.
Jan 23 17:51:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-00770331\x2df484\x2d4d50\x2d8cf4\x2d62dad4c0e6d7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-00770331\x2df484\x2d4d50\x2d8cf4\x2d62dad4c0e6d7.mount has successfully entered the 'dead' state.
Jan 23 17:51:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-00770331\x2df484\x2d4d50\x2d8cf4\x2d62dad4c0e6d7.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-00770331\x2df484\x2d4d50\x2d8cf4\x2d62dad4c0e6d7.mount has successfully entered the 'dead' state.
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.093305431Z" level=info msg="runSandbox: deleting pod ID 75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c from idIndex" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.093329099Z" level=info msg="runSandbox: removing pod sandbox 75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.093342394Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.093355290Z" level=info msg="runSandbox: unmounting shmPath for sandbox 75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.113447713Z" level=info msg="runSandbox: removing pod sandbox from storage: 75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.116435657Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.116454062Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=9b34166e-9711-4120-a347-8d348cfded3e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:52.116662 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:52.116859 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:52.116882 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:52.116932 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(75feba8d08077b43a3dd7c62c4d57430e51f775ffc2b90e025afae8948b20d9c): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30
Jan 23 17:51:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:52.161358 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.161728532Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.161780899Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.172915695Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/110dbe6a-8ae7-40f9-b638-7906cef3bb76 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:52.172937365Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:52.996421 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:51:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:51:52.996913 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:51:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:51:57.996663 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:51:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:57.997011474Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:51:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:57.997053741Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:51:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:58.009074650Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/f1e6370c-c4e3-470e-a6fa-ed2be6fead2d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:51:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:58.009095507Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:51:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:51:58.146223160Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.069798092Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.069835265Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:52:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-88716d4a\x2d3c14\x2d46be\x2d9b40\x2d7252445b4d5c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-88716d4a\x2d3c14\x2d46be\x2d9b40\x2d7252445b4d5c.mount has successfully entered the 'dead' state.
Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.075294413Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.075321572Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9e7cd303\x2d4751\x2d436f\x2d9771\x2d0732d21ac956.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-9e7cd303\x2d4751\x2d436f\x2d9771\x2d0732d21ac956.mount has successfully entered the 'dead' state. Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.080270880Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.080304100Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.081276724Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox 
Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.081312180Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.082731799Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.082761756Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cabf909a\x2de3ae\x2d4c39\x2db663\x2d2b9ee1e48774.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cabf909a\x2de3ae\x2d4c39\x2db663\x2d2b9ee1e48774.mount has successfully entered the 'dead' state. Jan 23 17:52:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-7fd3b44c\x2df105\x2d4c7d\x2d8f51\x2d08f543d0b4bc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-7fd3b44c\x2df105\x2d4c7d\x2d8f51\x2d08f543d0b4bc.mount has successfully entered the 'dead' state. Jan 23 17:52:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b6e6bd90\x2d1a5e\x2d421b\x2dbeb4\x2d1f308628f46c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-b6e6bd90\x2d1a5e\x2d421b\x2dbeb4\x2d1f308628f46c.mount has successfully entered the 'dead' state. Jan 23 17:52:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-88716d4a\x2d3c14\x2d46be\x2d9b40\x2d7252445b4d5c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-88716d4a\x2d3c14\x2d46be\x2d9b40\x2d7252445b4d5c.mount has successfully entered the 'dead' state. Jan 23 17:52:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9e7cd303\x2d4751\x2d436f\x2d9771\x2d0732d21ac956.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9e7cd303\x2d4751\x2d436f\x2d9771\x2d0732d21ac956.mount has successfully entered the 'dead' state. 
Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.121276802Z" level=info msg="runSandbox: deleting pod ID 2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0 from idIndex" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.121301957Z" level=info msg="runSandbox: removing pod sandbox 2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.121316878Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.121330302Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.126299943Z" level=info msg="runSandbox: deleting pod ID dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09 from idIndex" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.126321890Z" level=info msg="runSandbox: removing pod sandbox dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.126336549Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.126347464Z" level=info msg="runSandbox: unmounting shmPath for sandbox dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.129314563Z" level=info msg="runSandbox: deleting pod ID f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c from idIndex" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.129340045Z" level=info msg="runSandbox: removing pod sandbox f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.129352147Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.129363706Z" level=info msg="runSandbox: unmounting shmPath for 
sandbox f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.129319348Z" level=info msg="runSandbox: deleting pod ID 149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6 from idIndex" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.129427385Z" level=info msg="runSandbox: removing pod sandbox 149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.129438630Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.129450379Z" level=info msg="runSandbox: unmounting shmPath for sandbox 149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.137309980Z" level=info msg="runSandbox: deleting pod ID 84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982 from idIndex" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.137335065Z" level=info msg="runSandbox: removing pod sandbox 84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.137347879Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.137360222Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.137457132Z" level=info msg="runSandbox: removing pod sandbox from storage: 2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.140653186Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.140672861Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" 
id=cef966ce-b45b-4697-b010-9a23fdabb90b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.140862 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.140926 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.140948 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.140992 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.141457654Z" level=info msg="runSandbox: removing pod sandbox from storage: dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.144638417Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.144656005Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=ed65bc57-df83-407f-88d2-50b70256df99 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.144879 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.144923 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.144949 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.144998 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.148410384Z" level=info msg="runSandbox: removing pod sandbox from storage: f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.148454777Z" level=info msg="runSandbox: removing pod sandbox from storage: 149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.151709385Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.151728528Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=e82c82e5-1c80-44ff-b09a-9071bd96e2f7 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.151984 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.152019 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.152040 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.152085 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.154723636Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.154740834Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=12d113d0-039e-480b-98ba-c536acd1eeb9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.154936 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.154970 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.154994 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.155032 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.156413091Z" level=info msg="runSandbox: removing pod sandbox from storage: 84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.159696300Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.159716424Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=949baf64-939a-4f9b-b7d5-355070deed37 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.159886 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.159925 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.159951 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:02.160001 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:02.178818 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:02.178965 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:02.179044 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:02.179104 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:02.179171 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179198918Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179236591Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179271335Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179281226Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179321380Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179350597Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179373976Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179238779Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179399420Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 
17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.179422402Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.197626449Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/fb07877a-9dee-4b7b-b040-f8a99200c2c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.197648387Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.198535693Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/a9bfbbe6-578b-4882-b240-dd7a219db647 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.198554427Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.208878337Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/0129727f-d19a-4b54-8167-50fc77aabf4d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.209085840Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.213591994Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/314ef931-5ac0-437b-8c5a-cb1eb55b1710 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.213614498Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.214542102Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/8902e737-ff34-49eb-a81b-d989c14743a3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: 
IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.214561842Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:02.995410 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.995810276Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:02.995857213Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:52:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:03.006114434Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/af2d0235-d153-4992-8f79-229e11160e91 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:03.006135979Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-7fd3b44c\x2df105\x2d4c7d\x2d8f51\x2d08f543d0b4bc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-7fd3b44c\x2df105\x2d4c7d\x2d8f51\x2d08f543d0b4bc.mount has successfully entered the 'dead' state. Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-7fd3b44c\x2df105\x2d4c7d\x2d8f51\x2d08f543d0b4bc.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-7fd3b44c\x2df105\x2d4c7d\x2d8f51\x2d08f543d0b4bc.mount has successfully entered the 'dead' state. Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cabf909a\x2de3ae\x2d4c39\x2db663\x2d2b9ee1e48774.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cabf909a\x2de3ae\x2d4c39\x2db663\x2d2b9ee1e48774.mount has successfully entered the 'dead' state. Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cabf909a\x2de3ae\x2d4c39\x2db663\x2d2b9ee1e48774.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cabf909a\x2de3ae\x2d4c39\x2db663\x2d2b9ee1e48774.mount has successfully entered the 'dead' state. Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b6e6bd90\x2d1a5e\x2d421b\x2dbeb4\x2d1f308628f46c.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-b6e6bd90\x2d1a5e\x2d421b\x2dbeb4\x2d1f308628f46c.mount has successfully entered the 'dead' state. 
Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b6e6bd90\x2d1a5e\x2d421b\x2dbeb4\x2d1f308628f46c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b6e6bd90\x2d1a5e\x2d421b\x2dbeb4\x2d1f308628f46c.mount has successfully entered the 'dead' state.
Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-f8b9151ef4b499accb528ef71908a66dd1bd80b9faedafc12896d18f8f89f84c-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-149afe9dd58eaea06e4209f2e8351fcb7b08c49a0650c7d8bbd50c9b0c7e48a6-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9e7cd303\x2d4751\x2d436f\x2d9771\x2d0732d21ac956.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-9e7cd303\x2d4751\x2d436f\x2d9771\x2d0732d21ac956.mount has successfully entered the 'dead' state.
Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-88716d4a\x2d3c14\x2d46be\x2d9b40\x2d7252445b4d5c.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-88716d4a\x2d3c14\x2d46be\x2d9b40\x2d7252445b4d5c.mount has successfully entered the 'dead' state.
Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-2449df79dfd6857fe110d0d5396c0fab3ddd6fb55c0c74fd57ebc08a1495fab0-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-84e324ce49c9ea51c8f0845bf8fb88941cee56f75c40babffb25b36734620982-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:52:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-dde63bb0416667d88d286cd5d3c1e85b735c9a7022d006993d5ccc3f2f902a09-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:52:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:07.997113 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:52:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:07.997623 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:52:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:17.021217790Z" level=info msg="NetworkStart: stopping network for sandbox 0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:17.021416495Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/3d3dae5e-4f06-4228-958b-3d8bced8ed75 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:17.021439252Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:17.021446150Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:17.021452389Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:19.025242095Z" level=info msg="NetworkStart: stopping network for sandbox 3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:19.025389820Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/6ada08f1-ea24-4c3e-9eff-1da78467a0d9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:19.025412885Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:19.025419203Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:19.025426360Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:19 hub-master-0.workload.bos2.lab 
kubenswrapper[8631]: I0123 17:52:19.997194 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:52:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:19.997819 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:52:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:21.021279187Z" level=info msg="NetworkStart: stopping network for sandbox ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:21.021425676Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/db2cea1e-72c5-439e-a40f-82ee50a2206d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:21.021448904Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:21.021456626Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:21.021462590Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:22.022349399Z" level=info msg="NetworkStart: stopping network for sandbox b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:22.022490164Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/474ad0f7-84b2-4591-8144-bc40916e71ec Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:22.022513577Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:22.022520331Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:22.022526626Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:52:26.021139709Z" level=info msg="NetworkStart: stopping network for sandbox 84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.021299255Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2f45196b-d781-4038-bf75-6d5cfab7ddbd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.021324353Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.021331675Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.021339328Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.165298958Z" level=info msg="NetworkStart: stopping network for sandbox 314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.165426761Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/5b9b4e45-db08-48df-9dbf-8350f4dfb790 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.165448029Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.165454538Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:26.165461007Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:27.915652 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:27.915671 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:27.915677 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:27.915687 8631 
kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:27.915694 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:27.915702 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:52:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:27.915710 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:52:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:28.141281966Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:52:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:31.996447 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:52:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:31.996973 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.026547940Z" level=info msg="NetworkStart: stopping network for sandbox 4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027197644Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/6e3fb611-44a1-44d4-b69f-071b8eaf1fef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027238245Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027246638Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027254560Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027552490Z" level=info msg="NetworkStart: stopping network for sandbox 5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027653390Z" level=info msg="Got pod network 
&{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/6d1287ea-70d5-4368-b5ea-071085c7745f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027672704Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027678434Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:33.027684483Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.024721603Z" level=info msg="NetworkStart: stopping network for sandbox 183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.024865330Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/9d38073b-158c-4350-86d1-eb56baf06608 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.024888804Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.024895541Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.024902377Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.025072335Z" level=info msg="NetworkStart: stopping network for sandbox b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.025196813Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/dc993b49-6c98-4693-b77d-dcb63c6d97b2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.025226128Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.025232905Z" level=warning 
msg="falling back to loading from existing plugins on disk" Jan 23 17:52:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:36.025238408Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:37.185945251Z" level=info msg="NetworkStart: stopping network for sandbox 4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:37.186106307Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/110dbe6a-8ae7-40f9-b638-7906cef3bb76 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:37.186131584Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:37.186139095Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:37.186145721Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496358.1333] device (eno12409): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed') Jan 23 17:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496358.1337] device (eno12409): Activation: failed for connection 'Wired Connection' Jan 23 17:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496358.1338] device (eno12409): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed') Jan 23 17:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496358.1542] dhcp4 (eno12409): canceled DHCP transaction Jan 23 17:52:38 hub-master-0.workload.bos2.lab NetworkManager[3328]: [1674496358.1544] dhcp6 (eno12409): canceled DHCP transaction Jan 23 17:52:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:43.021372509Z" level=info msg="NetworkStart: stopping network for sandbox 4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:43.021533722Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/f1e6370c-c4e3-470e-a6fa-ed2be6fead2d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:43.021556810Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI 
cache" Jan 23 17:52:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:43.021563455Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:43.021569160Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:52:46.997780 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:52:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:52:47.001226 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.210612476Z" level=info msg="NetworkStart: stopping network for sandbox 8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.210781154Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/fb07877a-9dee-4b7b-b040-f8a99200c2c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.210808533Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.210815731Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.210822773Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.213526138Z" level=info msg="NetworkStart: stopping network for sandbox 57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.213668104Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/a9bfbbe6-578b-4882-b240-dd7a219db647 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.213692799Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:52:47.213699709Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.213706020Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.220881496Z" level=info msg="NetworkStart: stopping network for sandbox 4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.220990742Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/0129727f-d19a-4b54-8167-50fc77aabf4d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.221012787Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.221019113Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.221025068Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.226107982Z" level=info msg="NetworkStart: stopping network for sandbox ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.226233273Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/314ef931-5ac0-437b-8c5a-cb1eb55b1710 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.226255885Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.226263569Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.226270342Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.226876405Z" level=info msg="NetworkStart: stopping network for sandbox 41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:47 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.226984044Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/8902e737-ff34-49eb-a81b-d989c14743a3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.227007336Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.227015311Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:47.227022635Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:48.019669583Z" level=info msg="NetworkStart: stopping network for sandbox a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:52:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:48.019803557Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/af2d0235-d153-4992-8f79-229e11160e91 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:52:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:48.019827539Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:52:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:48.019834313Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:52:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:48.019840426Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:52:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:52:58.140544292Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:53:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:01.997022 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:53:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:02.001650 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.033299591Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.033299591Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.033344057Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-3d3dae5e\x2d4f06\x2d4228\x2d958b\x2d3d8bced8ed75.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-3d3dae5e\x2d4f06\x2d4228\x2d958b\x2d3d8bced8ed75.mount has successfully entered the 'dead' state.
Jan 23 17:53:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-3d3dae5e\x2d4f06\x2d4228\x2d958b\x2d3d8bced8ed75.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-3d3dae5e\x2d4f06\x2d4228\x2d958b\x2d3d8bced8ed75.mount has successfully entered the 'dead' state.
Jan 23 17:53:02 hub-master-0.workload.bos2.lab systemd[1]: run-netns-3d3dae5e\x2d4f06\x2d4228\x2d958b\x2d3d8bced8ed75.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-3d3dae5e\x2d4f06\x2d4228\x2d958b\x2d3d8bced8ed75.mount has successfully entered the 'dead' state.
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.077366397Z" level=info msg="runSandbox: deleting pod ID 0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277 from idIndex" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.077400949Z" level=info msg="runSandbox: removing pod sandbox 0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.077417073Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.077432951Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.097437351Z" level=info msg="runSandbox: removing pod sandbox from storage: 0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.100440519Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:02.100461191Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=263b3426-4419-4bdc-976f-dd75b43ce72a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:02.100683 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:53:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:02.100722 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:53:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:02.100744 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:53:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:02.100785 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(0f6a02c0546108f8edd7ef08fe8f5ec69116c352d7e305c388abb4bb7574c277): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.036065366Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.036109221Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6ada08f1\x2dea24\x2d4c3e\x2d9eff\x2d1da78467a0d9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-6ada08f1\x2dea24\x2d4c3e\x2d9eff\x2d1da78467a0d9.mount has successfully entered the 'dead' state.
Jan 23 17:53:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6ada08f1\x2dea24\x2d4c3e\x2d9eff\x2d1da78467a0d9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-6ada08f1\x2dea24\x2d4c3e\x2d9eff\x2d1da78467a0d9.mount has successfully entered the 'dead' state.
Jan 23 17:53:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6ada08f1\x2dea24\x2d4c3e\x2d9eff\x2d1da78467a0d9.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-6ada08f1\x2dea24\x2d4c3e\x2d9eff\x2d1da78467a0d9.mount has successfully entered the 'dead' state.
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.077307006Z" level=info msg="runSandbox: deleting pod ID 3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420 from idIndex" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.077333008Z" level=info msg="runSandbox: removing pod sandbox 3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.077347015Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.077358600Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.089438844Z" level=info msg="runSandbox: removing pod sandbox from storage: 3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.093034438Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:04.093052551Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=391d18e4-82a7-4819-8ceb-d1103602dce4 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:04.093282 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:53:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:04.093334 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:53:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:04.093358 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(3a11db4428a9656ecb42552153f570eefe1bbfcb1961ba1709c602cd285e0420): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.033166104Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.033400212Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-db2cea1e\x2d72c5\x2d439e\x2da40f\x2d82ee50a2206d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-db2cea1e\x2d72c5\x2d439e\x2da40f\x2d82ee50a2206d.mount has successfully entered the 'dead' state. Jan 23 17:53:06 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-db2cea1e\x2d72c5\x2d439e\x2da40f\x2d82ee50a2206d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-db2cea1e\x2d72c5\x2d439e\x2da40f\x2d82ee50a2206d.mount has successfully entered the 'dead' state. Jan 23 17:53:06 hub-master-0.workload.bos2.lab systemd[1]: run-netns-db2cea1e\x2d72c5\x2d439e\x2da40f\x2d82ee50a2206d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-db2cea1e\x2d72c5\x2d439e\x2da40f\x2d82ee50a2206d.mount has successfully entered the 'dead' state. 
Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.084309144Z" level=info msg="runSandbox: deleting pod ID ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97 from idIndex" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.084334661Z" level=info msg="runSandbox: removing pod sandbox ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.084348146Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.084359877Z" level=info msg="runSandbox: unmounting shmPath for sandbox ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.095414043Z" level=info msg="runSandbox: removing pod sandbox from storage: ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.098995668Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:06.099015400Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=b59d0616-424e-4c35-a639-82dfb2c88914 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:06.099228 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:53:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:06.099272 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:53:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:06.099293 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:53:06 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:06.099339 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(ec7db81ff700f2192fd767e6af674c855bdbdfe8817664ffbdec49db47072e97): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.033174275Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.033215131Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-474ad0f7\x2d84b2\x2d4591\x2d8144\x2dbc40916e71ec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-474ad0f7\x2d84b2\x2d4591\x2d8144\x2dbc40916e71ec.mount has successfully entered the 'dead' state. Jan 23 17:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-474ad0f7\x2d84b2\x2d4591\x2d8144\x2dbc40916e71ec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-474ad0f7\x2d84b2\x2d4591\x2d8144\x2dbc40916e71ec.mount has successfully entered the 'dead' state. Jan 23 17:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-netns-474ad0f7\x2d84b2\x2d4591\x2d8144\x2dbc40916e71ec.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-474ad0f7\x2d84b2\x2d4591\x2d8144\x2dbc40916e71ec.mount has successfully entered the 'dead' state. 
Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.091308596Z" level=info msg="runSandbox: deleting pod ID b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a from idIndex" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.091332121Z" level=info msg="runSandbox: removing pod sandbox b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.091345433Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.091357498Z" level=info msg="runSandbox: unmounting shmPath for sandbox b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.107452839Z" level=info msg="runSandbox: removing pod sandbox from storage: b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.110873296Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:07.110891275Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=91d6d0fb-84dc-44ee-aee6-54ea831c4270 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:07.111099 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:07.111144 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:07.111168 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:53:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:07.111223 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b0088b5671b13c3db1c95c64d56985df80117c0d734cffcb1fea01f1c08d792a): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.033350093Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.033386687Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2f45196b\x2dd781\x2d4038\x2dbf75\x2d6d5cfab7ddbd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-2f45196b\x2dd781\x2d4038\x2dbf75\x2d6d5cfab7ddbd.mount has successfully entered the 'dead' state. Jan 23 17:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2f45196b\x2dd781\x2d4038\x2dbf75\x2d6d5cfab7ddbd.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-2f45196b\x2dd781\x2d4038\x2dbf75\x2d6d5cfab7ddbd.mount has successfully entered the 'dead' state. 
Jan 23 17:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2f45196b\x2dd781\x2d4038\x2dbf75\x2d6d5cfab7ddbd.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-2f45196b\x2dd781\x2d4038\x2dbf75\x2d6d5cfab7ddbd.mount has successfully entered the 'dead' state.
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.072329142Z" level=info msg="runSandbox: deleting pod ID 84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f from idIndex" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.072353616Z" level=info msg="runSandbox: removing pod sandbox 84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.072368155Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.072380282Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.085450331Z" level=info msg="runSandbox: removing pod sandbox from storage: 84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.089055917Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.089074783Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=df203446-02af-4a82-a05f-26982712d242 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:11.089324 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:11.089364 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:11.089385 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:11.089431 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(84ef32f6ceb1796bd0e4e309098c6812f11b575b36fdba41354b124f723c7e5f): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.176268649Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.176301014Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.235297915Z" level=info msg="runSandbox: deleting pod ID 314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473 from idIndex" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.235328242Z" level=info msg="runSandbox: removing pod sandbox 314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.235341251Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.235351919Z" level=info msg="runSandbox: unmounting shmPath for sandbox 314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.247431567Z" level=info msg="runSandbox: removing pod sandbox from storage: 314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.250755031Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.250772719Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=ddf35f1a-ad72-4e53-a845-1e8c336acc63 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:11.250984 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:11.251033 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:11.251060 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:11.251115 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:53:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:11.313562 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.313872064Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.313902714Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.325323857Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/bd5eb72b-fa62-4316-8d62-112f5626c6f0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:11.325343608Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5b9b4e45\x2ddb08\x2d48df\x2d9dbf\x2d8350f4dfb790.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-5b9b4e45\x2ddb08\x2d48df\x2d9dbf\x2d8350f4dfb790.mount has successfully entered the 'dead' state.
Jan 23 17:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5b9b4e45\x2ddb08\x2d48df\x2d9dbf\x2d8350f4dfb790.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-5b9b4e45\x2ddb08\x2d48df\x2d9dbf\x2d8350f4dfb790.mount has successfully entered the 'dead' state.
Jan 23 17:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5b9b4e45\x2ddb08\x2d48df\x2d9dbf\x2d8350f4dfb790.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-5b9b4e45\x2ddb08\x2d48df\x2d9dbf\x2d8350f4dfb790.mount has successfully entered the 'dead' state.
Jan 23 17:53:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-314d5084b9a56bf2da44c252a02855bc178320b4acb00637d0ae9801dc5a2473-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:53:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:12.996135 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:53:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:12.996547359Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:12.996599375Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:13.011918003Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/4a614cf7-bbaf-4b49-b875-08e01c91192d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:13.011946195Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:14.995852 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:53:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:14.996259 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205"
Jan 23 17:53:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:14.996239775Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:14.996285913Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:53:14 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:14.996758 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:53:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:15.007514826Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/4f4de206-1b34-4fe6-a37a-7448e6f35482 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:15.007557549Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.038153768Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.038190588Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.038172298Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.038245440Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6d1287ea\x2d70d5\x2d4368\x2db5ea\x2d071085c7745f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-6d1287ea\x2d70d5\x2d4368\x2db5ea\x2d071085c7745f.mount has successfully entered the 'dead' state.
Jan 23 17:53:18 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6e3fb611\x2d44a1\x2d44d4\x2db69f\x2d071b8eaf1fef.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-6e3fb611\x2d44a1\x2d44d4\x2db69f\x2d071b8eaf1fef.mount has successfully entered the 'dead' state.
Jan 23 17:53:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6e3fb611\x2d44a1\x2d44d4\x2db69f\x2d071b8eaf1fef.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-6e3fb611\x2d44a1\x2d44d4\x2db69f\x2d071b8eaf1fef.mount has successfully entered the 'dead' state.
Jan 23 17:53:18 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6d1287ea\x2d70d5\x2d4368\x2db5ea\x2d071085c7745f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-6d1287ea\x2d70d5\x2d4368\x2db5ea\x2d071085c7745f.mount has successfully entered the 'dead' state.
Jan 23 17:53:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6e3fb611\x2d44a1\x2d44d4\x2db69f\x2d071b8eaf1fef.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-6e3fb611\x2d44a1\x2d44d4\x2db69f\x2d071b8eaf1fef.mount has successfully entered the 'dead' state.
Jan 23 17:53:18 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6d1287ea\x2d70d5\x2d4368\x2db5ea\x2d071085c7745f.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-6d1287ea\x2d70d5\x2d4368\x2db5ea\x2d071085c7745f.mount has successfully entered the 'dead' state.
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.082327125Z" level=info msg="runSandbox: deleting pod ID 4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e from idIndex" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.082358524Z" level=info msg="runSandbox: removing pod sandbox 4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.082372441Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.082384086Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.082330353Z" level=info msg="runSandbox: deleting pod ID 5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda from idIndex" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.082437587Z" level=info msg="runSandbox: removing pod sandbox 5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.082451164Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.082463586Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.102417814Z" level=info msg="runSandbox: removing pod sandbox from storage: 4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.102439821Z" level=info msg="runSandbox: removing pod sandbox from storage: 5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.105418310Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.105436703Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=101ead3b-a22b-40de-b494-edf89992d68d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:18.105676 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:18.105879 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:18.105901 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:18.105951 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.108468614Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.108485068Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=bee79b87-fb63-4dfe-9e69-292722621e72 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:18.108586 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:18.108617 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:18.108638 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:18.108675 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:53:18 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:18.996169 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.996575093Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:18.996617268Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:53:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:19.007252930Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/01b98e9d-1088-4190-aed4-15932d9b3715 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:19.007273430Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:19 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-5ce2403f893981a30afe3567236b9de70e2b3c1abaf043d47493ef9a29cafeda-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:53:19 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-4b639d8cd3275d828004f15ab0d76605e11c6d851f86bb510cb41d478b06527e-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:53:20 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:20.995862 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:53:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:20.996288797Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:20 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:20.996327842Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.006825609Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/0e427b19-4997-40f6-a9ba-06abd53d822d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.006845251Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.035641760Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.035674874Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.036135393Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.036177341Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-9d38073b\x2d158c\x2d4350\x2d86d1\x2deb56baf06608.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-9d38073b\x2d158c\x2d4350\x2d86d1\x2deb56baf06608.mount has successfully entered the 'dead' state.
Jan 23 17:53:21 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-dc993b49\x2d6c98\x2d4693\x2db77d\x2ddcb63c6d97b2.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-dc993b49\x2d6c98\x2d4693\x2db77d\x2ddcb63c6d97b2.mount has successfully entered the 'dead' state.
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.070305907Z" level=info msg="runSandbox: deleting pod ID 183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3 from idIndex" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.070330870Z" level=info msg="runSandbox: removing pod sandbox 183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.070346268Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.070357317Z" level=info msg="runSandbox: unmounting shmPath for sandbox 183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.074406438Z" level=info msg="runSandbox: deleting pod ID b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb from idIndex" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.074440171Z" level=info msg="runSandbox: removing pod sandbox b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.074454805Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.074470363Z" level=info msg="runSandbox: unmounting shmPath for sandbox b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.086421588Z" level=info msg="runSandbox: removing pod sandbox from storage: 183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.086459471Z" level=info msg="runSandbox: removing pod sandbox from storage: b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.089431231Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.089451996Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=5e14e2c5-922d-4bde-a9a2-6cc317abb6df name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:21.089663 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:21.089706 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:21.089728 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:21.089775 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.092674288Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:21.092695102Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=8dad4b88-d235-4a42-b571-67fc990308e1 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:21.092889 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:21.092934 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:21.092955 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:53:21 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:21.092997 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:53:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-dc993b49\x2d6c98\x2d4693\x2db77d\x2ddcb63c6d97b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-dc993b49\x2d6c98\x2d4693\x2db77d\x2ddcb63c6d97b2.mount has successfully entered the 'dead' state. Jan 23 17:53:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-dc993b49\x2d6c98\x2d4693\x2db77d\x2ddcb63c6d97b2.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-dc993b49\x2d6c98\x2d4693\x2db77d\x2ddcb63c6d97b2.mount has successfully entered the 'dead' state. Jan 23 17:53:21 hub-master-0.workload.bos2.lab systemd[1]: run-netns-9d38073b\x2d158c\x2d4350\x2d86d1\x2deb56baf06608.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-9d38073b\x2d158c\x2d4350\x2d86d1\x2deb56baf06608.mount has successfully entered the 'dead' state. Jan 23 17:53:21 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-9d38073b\x2d158c\x2d4350\x2d86d1\x2deb56baf06608.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-9d38073b\x2d158c\x2d4350\x2d86d1\x2deb56baf06608.mount has successfully entered the 'dead' state. Jan 23 17:53:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b790f13dcac73844aad62dde9452e014c0dc1ebecc834ea0948e724b5297e3bb-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:53:21 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-183c073bd91fea5929f4c570ecb13ed9de9412380f344f50eed5906db0e456f3-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.198516668Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.198556330Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-110dbe6a\x2d8ae7\x2d40f9\x2db638\x2d7906cef3bb76.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-110dbe6a\x2d8ae7\x2d40f9\x2db638\x2d7906cef3bb76.mount has successfully entered the 'dead' state. Jan 23 17:53:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-110dbe6a\x2d8ae7\x2d40f9\x2db638\x2d7906cef3bb76.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-110dbe6a\x2d8ae7\x2d40f9\x2db638\x2d7906cef3bb76.mount has successfully entered the 'dead' state. 
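
Every RunPodSandbox failure above bottoms out in the same wait: before wiring a pod into "multus-cni-network", Multus polls for the default network's readiness indicator file (/var/run/multus/cni/net.d/10-ovn-kubernetes.conf, written by OVN-Kubernetes once it is up), times out, and CRI-O then tears the sandbox down. Below is a minimal Go sketch of that gate — an approximation, not Multus's source; the one-second poll interval and 60s timeout are illustrative assumptions. The "timed out waiting for the condition" suffix in the log is the standard rendering of wait.ErrWaitTimeout from k8s.io/apimachinery.

    // Approximate sketch of the readiness-indicator gate (assumed
    // interval/timeout; not the actual Multus implementation).
    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        // wait.ErrWaitTimeout renders as "timed out waiting for the
        // condition" -- the exact suffix in the errors above.
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            if _, err := os.Stat(path); err != nil {
                return false, nil // file not there yet: keep polling
            }
            return true, nil
        })
    }

    func main() {
        const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
        if err := waitForReadinessIndicator(indicator, 60*time.Second); err != nil {
            fmt.Printf("still waiting for readinessindicatorfile @ %s: %v\n", indicator, err)
        }
    }

Until that file exists, every add (and, further down, every "(on del)" cleanup) against the default network fails the same way regardless of which pod triggered it.
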
Jan 23 17:53:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-110dbe6a\x2d8ae7\x2d40f9\x2db638\x2d7906cef3bb76.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-110dbe6a\x2d8ae7\x2d40f9\x2db638\x2d7906cef3bb76.mount has successfully entered the 'dead' state. Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.249423352Z" level=info msg="runSandbox: deleting pod ID 4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1 from idIndex" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.249450065Z" level=info msg="runSandbox: removing pod sandbox 4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.249463126Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.249477245Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.258471075Z" level=info msg="runSandbox: removing pod sandbox from storage: 4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.262116049Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.262135754Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=2c118de4-07f6-4ff1-8624-715939252d22 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:22.262349 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:53:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:22.262398 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:53:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:22.262424 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:53:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:22.262474 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(4ecdd97547d7dbc4a40b267afcfe86016646b6b1b6766197bc65e516f3d5beb1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:53:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:22.346815 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.347152688Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.347200569Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.357768316Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/52d312c9-a7de-422a-bfe2-d5496c230fb9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:22.357788797Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:24.996274 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:53:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:24.996563942Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:24.996604242Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:25.010347262Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/1a42e121-f8e3-43df-8a8d-f49f426471c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:25.010374355Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:25.996359 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:53:25 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:25.996927 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:27.916174 8631 kubelet_getters.go:182] "Pod status updated" 
pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:27.916194 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:27.916202 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:27.916212 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:27.916221 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:27.916227 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:53:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:27.916237 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.031789571Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.031840887Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f1e6370c\x2dc4e3\x2d470e\x2da6fa\x2ded2be6fead2d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f1e6370c\x2dc4e3\x2d470e\x2da6fa\x2ded2be6fead2d.mount has successfully entered the 'dead' state. Jan 23 17:53:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f1e6370c\x2dc4e3\x2d470e\x2da6fa\x2ded2be6fead2d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f1e6370c\x2dc4e3\x2d470e\x2da6fa\x2ded2be6fead2d.mount has successfully entered the 'dead' state. Jan 23 17:53:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f1e6370c\x2dc4e3\x2d470e\x2da6fa\x2ded2be6fead2d.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f1e6370c\x2dc4e3\x2d470e\x2da6fa\x2ded2be6fead2d.mount has successfully entered the 'dead' state. Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.076311288Z" level=info msg="runSandbox: deleting pod ID 4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d from idIndex" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.076341022Z" level=info msg="runSandbox: removing pod sandbox 4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.076360238Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.076382816Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d-userdata-shm.mount has successfully entered the 'dead' state. 
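
The backslash-escaped mount units interleaved with these teardowns (run-utsns-..., run-ipcns-..., run-netns-..., and the overlay-containers ...-userdata-shm units) are the per-sandbox namespace and /dev/shm mounts being released as each failed sandbox is dismantled. The \x2d runs are systemd's escaping of "-" in a path component when it forms a unit name. A minimal Go illustration of just that rule (the real systemd-escape algorithm handles other characters too):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeComponent applies only the "-" rule of systemd unit-name
    // escaping; the full algorithm escapes additional characters.
    func escapeComponent(s string) string {
        return strings.ReplaceAll(s, "-", `\x2d`)
    }

    func main() {
        uuid := "9d38073b-158c-4350-86d1-eb56baf06608" // netns ID from this log
        fmt.Printf("run-utsns-%s.mount\n", escapeComponent(uuid))
        // -> run-utsns-9d38073b\x2d158c\x2d4350\x2d86d1\x2deb56baf06608.mount,
        //    matching the unit names reported as 'dead' above.
    }
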
Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.092460173Z" level=info msg="runSandbox: removing pod sandbox from storage: 4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.095267669Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.095286634Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=118f2580-fe19-4744-9ba5-22660fa5e1ef name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:28.095530 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:53:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:28.095573 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:53:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:28.095598 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:53:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:28.095648 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(4439e73942365764073bae696466a5cde5612dad990467fe3cbbb4ab1fa6766d): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:53:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:28.142355877Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:53:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:31.995585 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:53:31 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:31.995771 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:53:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:31.996094084Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:31.996319278Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:31.996137139Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:31.996494986Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.011568237Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/e3d93049-9ab1-4cba-b3ab-bb619ce5ca0a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.011588853Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.013914007Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/fd67f0f6-31a6-4d81-9bc6-3d7e07a483ff Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.013936505Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.222609689Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.222644070Z" level=info msg="runSandbox: cleaning up namespaces 
after failing to run sandbox 8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.225588198Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.225621836Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fb07877a\x2d9dee\x2d4b7b\x2db040\x2df8a99200c2c1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-fb07877a\x2d9dee\x2d4b7b\x2db040\x2df8a99200c2c1.mount has successfully entered the 'dead' state. Jan 23 17:53:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a9bfbbe6\x2d578b\x2d4882\x2db240\x2ddd7a219db647.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a9bfbbe6\x2d578b\x2d4882\x2db240\x2ddd7a219db647.mount has successfully entered the 'dead' state. 
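
The likely root cause surfaced at 17:53:25: ovnkube-node-897lw is in CrashLoopBackOff, so OVN-Kubernetes never writes the readiness indicator file, and both the add-path waits and the "(on del)" ReadinessIndicatorFile waits keep timing out. The "back-off 5m0s" figure is consistent with kubelet's documented restart backoff (assumed here: 10s initial delay, doubling per consecutive crash, capped at five minutes), sketched below:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet crash-restart policy: 10s, doubling, 5m cap.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
        // By the 6th consecutive crash the delay pins at "back-off 5m0s",
        // the value in the ovnkube-node message above.
    }

While that backoff window is open, kubelet keeps reporting "No sandbox for pod can be found. Need to start a new one" for the affected pods, which is why the same pods reappear throughout this section with fresh sandbox IDs.
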
Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.230870763Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.230899399Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.236639925Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.236672977Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.236989491Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.237025139Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:53:32.268312004Z" level=info msg="runSandbox: deleting pod ID 57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30 from idIndex" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.268337149Z" level=info msg="runSandbox: removing pod sandbox 57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.268351142Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.268363563Z" level=info msg="runSandbox: unmounting shmPath for sandbox 57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.270321245Z" level=info msg="runSandbox: deleting pod ID 8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce from idIndex" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.270350400Z" level=info msg="runSandbox: removing pod sandbox 8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.270365632Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.270379617Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.272305050Z" level=info msg="runSandbox: deleting pod ID 4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b from idIndex" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.272329669Z" level=info msg="runSandbox: removing pod sandbox 4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.272343472Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.272354461Z" level=info msg="runSandbox: unmounting shmPath for sandbox 
4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.280285167Z" level=info msg="runSandbox: deleting pod ID 41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1 from idIndex" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.280309290Z" level=info msg="runSandbox: removing pod sandbox 41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.280323840Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.280336279Z" level=info msg="runSandbox: unmounting shmPath for sandbox 41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.280287565Z" level=info msg="runSandbox: deleting pod ID ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f from idIndex" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.280394052Z" level=info msg="runSandbox: removing pod sandbox ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.280406601Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.280419300Z" level=info msg="runSandbox: unmounting shmPath for sandbox ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.284461144Z" level=info msg="runSandbox: removing pod sandbox from storage: 57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.285433558Z" level=info msg="runSandbox: removing pod sandbox from storage: 8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.287490823Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" 
id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.287511208Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=e42a93ae-6d0e-4df2-89f5-5e4086f872d0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.287807 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.287857 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.287881 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.287932 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.288433346Z" level=info msg="runSandbox: removing pod sandbox from storage: 4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.290659752Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.290676808Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=9a93f654-b936-4fee-9f19-27174920e41b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.290906 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.290939 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.290959 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.290995 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.291481074Z" level=info msg="runSandbox: removing pod sandbox from storage: 41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.291490582Z" level=info msg="runSandbox: removing pod sandbox from storage: ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.293611488Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.293628964Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=c2623c53-d9e0-4eb2-93c0-fe754e8b9867 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.293844 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.293876 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.293898 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.293936 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.297008706Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.297029996Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=2f093e51-35b6-4673-aa1a-b6290787ccbd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.297243 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.297275 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.297296 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.297330 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.300139333Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.300158611Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=d57230a2-0c69-4237-8bb2-79dac128c817 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.300261 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.300305 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.300325 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:32.300362 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:32.366357 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:32.366411 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:32.366541 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366565760Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366594196Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:32.366599 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366676485Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366704817Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:32.366696 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366792884Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366809480Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366819346Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366861476Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366885564Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.366909997Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.398237331Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/956d844c-4948-4047-9c9e-faaafec476f7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.398280157Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.399419645Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/1796cc24-bce6-48e0-9aa1-ab596a8e3b43 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.399439197Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.400559345Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/2a41d728-372b-43e4-abca-60e97938fe21 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:53:32.400580299Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.402028614Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5b3fdcf4-22ca-471f-9f25-d6b66fd06692 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.402049370Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.403074384Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/0e9e4284-4f0d-4e0c-941c-e8a56e244b70 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.403093386Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:32.996165 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.996536543Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:32.996581820Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.007477235Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/85bb2206-85c5-4a91-b601-103ab9829a13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.007506059Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8902e737\x2dff34\x2d49eb\x2da81b\x2dd989c14743a3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-8902e737\x2dff34\x2d49eb\x2da81b\x2dd989c14743a3.mount has successfully entered the 'dead' state. 
Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8902e737\x2dff34\x2d49eb\x2da81b\x2dd989c14743a3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-8902e737\x2dff34\x2d49eb\x2da81b\x2dd989c14743a3.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8902e737\x2dff34\x2d49eb\x2da81b\x2dd989c14743a3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-8902e737\x2dff34\x2d49eb\x2da81b\x2dd989c14743a3.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-314ef931\x2d5ac0\x2d437b\x2d8c5a\x2dcb1eb55b1710.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-314ef931\x2d5ac0\x2d437b\x2d8c5a\x2dcb1eb55b1710.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-314ef931\x2d5ac0\x2d437b\x2d8c5a\x2dcb1eb55b1710.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-314ef931\x2d5ac0\x2d437b\x2d8c5a\x2dcb1eb55b1710.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-314ef931\x2d5ac0\x2d437b\x2d8c5a\x2dcb1eb55b1710.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-314ef931\x2d5ac0\x2d437b\x2d8c5a\x2dcb1eb55b1710.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-41609d95f69a5c2b3036816f75fa231f00f2d04fadf1ccf25c7cf34516ad8bd1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0129727f\x2dd19a\x2d4b54\x2d8167\x2d50fc77aabf4d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0129727f\x2dd19a\x2d4b54\x2d8167\x2d50fc77aabf4d.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0129727f\x2dd19a\x2d4b54\x2d8167\x2d50fc77aabf4d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0129727f\x2dd19a\x2d4b54\x2d8167\x2d50fc77aabf4d.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0129727f\x2dd19a\x2d4b54\x2d8167\x2d50fc77aabf4d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0129727f\x2dd19a\x2d4b54\x2d8167\x2d50fc77aabf4d.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f-userdata-shm.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ec66ab9f5573bb784e78e09022cd9edeb0d9a9fcb9a8f583d25a175b2950d80f-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a9bfbbe6\x2d578b\x2d4882\x2db240\x2ddd7a219db647.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a9bfbbe6\x2d578b\x2d4882\x2db240\x2ddd7a219db647.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a9bfbbe6\x2d578b\x2d4882\x2db240\x2ddd7a219db647.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a9bfbbe6\x2d578b\x2d4882\x2db240\x2ddd7a219db647.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fb07877a\x2d9dee\x2d4b7b\x2db040\x2df8a99200c2c1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-fb07877a\x2d9dee\x2d4b7b\x2db040\x2df8a99200c2c1.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fb07877a\x2d9dee\x2d4b7b\x2db040\x2df8a99200c2c1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-fb07877a\x2d9dee\x2d4b7b\x2db040\x2df8a99200c2c1.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-57b1c2de39f1419358f02e9f93df85d5d9f8519f825e101c2c905d7a678cdd30-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4be16e46b90d6baea5cd771ffa2faad3ebe75e58ac4c3ad97cb7e10cae82924b-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8635a66c9e3d0c6decfb34ac1b04c6bee2558496666dd2f64f4c584d48e828ce-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.030615057Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.030652918Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-af2d0235\x2dd153\x2d4992\x2d8f79\x2d229e11160e91.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-af2d0235\x2dd153\x2d4992\x2d8f79\x2d229e11160e91.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-af2d0235\x2dd153\x2d4992\x2d8f79\x2d229e11160e91.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-af2d0235\x2dd153\x2d4992\x2d8f79\x2d229e11160e91.mount has successfully entered the 'dead' state. Jan 23 17:53:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-af2d0235\x2dd153\x2d4992\x2d8f79\x2d229e11160e91.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-af2d0235\x2dd153\x2d4992\x2d8f79\x2d229e11160e91.mount has successfully entered the 'dead' state. 
Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.075307460Z" level=info msg="runSandbox: deleting pod ID a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd from idIndex" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.075333509Z" level=info msg="runSandbox: removing pod sandbox a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.075348109Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.075361683Z" level=info msg="runSandbox: unmounting shmPath for sandbox a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.091424913Z" level=info msg="runSandbox: removing pod sandbox from storage: a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.094686014Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.094706855Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=930a7deb-332e-4504-9537-22647f7f826d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:33.094936 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:53:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:33.094981 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:53:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:33.095003 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:53:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:33.095053 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:53:33 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:33.995823 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.996358818Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:53:33 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:33.996410780Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:34.007959349Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/15dd095f-ac0f-433a-9b86-d5cdf4524396 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:53:34 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:34.007980227Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:53:34 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a0c514668c8569f70547c20dec679800f5a535f2b090b28680840ee329b95ecd-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:53:37 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:37.997769 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:53:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:37.998582000Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=8f60653e-44c3-4c7e-944b-94d2f787039f name=/runtime.v1.ImageService/ImageStatus Jan 23 17:53:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:37.998775170Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8f60653e-44c3-4c7e-944b-94d2f787039f name=/runtime.v1.ImageService/ImageStatus Jan 23 17:53:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:37.999276765Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf" id=c1497ed4-53cc-4153-bc0e-4bd3dd1b5a69 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:53:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:37.999388025Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a1141020a50fec6d1897234d017b639449e6dfa0fa7ed02e544d528cab61c50d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74fcd7470b682be673ccbc763ac25783f6997a253c8ca20f63b789520eb65bf],Size_:1101922975,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c1497ed4-53cc-4153-bc0e-4bd3dd1b5a69 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.000296242Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=2182ad92-f0f4-4de8-b244-65485d7c09fa name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.000379048Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:53:38 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope. -- Subject: Unit crio-conmon-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:53:38 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2. -- Subject: Unit crio-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope has finished starting up. -- -- The start-up result is done. 
Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.125122933Z" level=info msg="Created container 890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=2182ad92-f0f4-4de8-b244-65485d7c09fa name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.125784564Z" level=info msg="Starting container: 890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" id=0fdacb6f-821d-4986-acb9-e02f441edd19 name=/runtime.v1.RuntimeService/StartContainer Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.144540723Z" level=info msg="Started container" PID=191721 containerID=890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2 description=openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node id=0fdacb6f-821d-4986-acb9-e02f441edd19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39344ed0f2226495a337fe0685058056a44c2cac73c45af7b450942b8876844b Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.148857425Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.158786394Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.158806507Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.158819883Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.167824621Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.167844542Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.167856140Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.176334376Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.176352774Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.176364217Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.184119090Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.184136151Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.184145903Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Jan 23 17:53:38 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:53:38.192228149Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:53:38 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:38.192246216Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:53:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:38.378385 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/197.log" Jan 23 17:53:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:38.379685 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerStarted Data:890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2} Jan 23 17:53:38 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:38.380081 8631 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" Jan 23 17:53:38 hub-master-0.workload.bos2.lab conmon[191706]: conmon 890f73fe1f6213114d64 : container 191721 exited with status 1 Jan 23 17:53:38 hub-master-0.workload.bos2.lab systemd[1]: crio-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope has successfully entered the 'dead' state. Jan 23 17:53:38 hub-master-0.workload.bos2.lab systemd[1]: crio-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope: Consumed 580ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope completed and consumed the indicated resources. Jan 23 17:53:38 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope has successfully entered the 'dead' state. Jan 23 17:53:38 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope: Consumed 51ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit crio-conmon-890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2.scope completed and consumed the indicated resources. 
Jan 23 17:53:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:39.382980 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/198.log" Jan 23 17:53:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:39.383346 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/197.log" Jan 23 17:53:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:39.384343 8631 generic.go:296] "Generic (PLEG): container finished" podID=409cdcf0-1eab-47ad-9389-ad5809e748ff containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" exitCode=1 Jan 23 17:53:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:39.384367 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" event=&{ID:409cdcf0-1eab-47ad-9389-ad5809e748ff Type:ContainerDied Data:890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2} Jan 23 17:53:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:39.384388 8631 scope.go:115] "RemoveContainer" containerID="fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" Jan 23 17:53:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:39.384885873Z" level=info msg="Removing container: fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205" id=10afbd31-6a82-4bc2-a9b3-9afcfca7f4e7 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:53:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:39.385253 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" Jan 23 17:53:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:39.385749 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:53:39 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-5b47b50a0f4672c1473de1802e7e93049ebfe6df476dc6d86956d08472f1c433-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-5b47b50a0f4672c1473de1802e7e93049ebfe6df476dc6d86956d08472f1c433-merged.mount has successfully entered the 'dead' state. 
Jan 23 17:53:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:39.421274572Z" level=info msg="Removed container fcf012811a61da80c03673ce0a1db6682915111f624fc946ba79108115847205: openshift-ovn-kubernetes/ovnkube-node-897lw/ovnkube-node" id=10afbd31-6a82-4bc2-a9b3-9afcfca7f4e7 name=/runtime.v1.RuntimeService/RemoveContainer
Jan 23 17:53:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:40.387958 8631 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-897lw_409cdcf0-1eab-47ad-9389-ad5809e748ff/ovnkube-node/198.log"
Jan 23 17:53:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:40.390375 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:53:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:40.390878 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:53:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:41.995918 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:53:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:41.996411418Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:41.996468541Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:53:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:42.008297504Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/09a65fe4-c329-4480-921e-7be6fc2525e6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:42.008321877Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:45.995657 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:53:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:45.996068907Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:45.996123600Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:53:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:46.007448918Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/a7ae5d05-1827-4740-97e5-2eee13bfd1d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:46.007470642Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:53:51.997068 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:53:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:53:51.997628 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:53:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:56.339693282Z" level=info msg="NetworkStart: stopping network for sandbox 9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:56.339921459Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/bd5eb72b-fa62-4316-8d62-112f5626c6f0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:56.339946613Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:53:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:56.339955035Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:53:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:56.339963987Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:58.025444662Z" level=info msg="NetworkStart: stopping network for sandbox fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:58.025586487Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/4a614cf7-bbaf-4b49-b875-08e01c91192d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:58.025609012Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:58.025616176Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:58.025622660Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:53:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:53:58.146693231Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:54:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:00.021538254Z" level=info msg="NetworkStart: stopping network for sandbox b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:00.021713267Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988 UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/4f4de206-1b34-4fe6-a37a-7448e6f35482 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:00.021739893Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:00.021747231Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:00.021755111Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:04.021138598Z" level=info msg="NetworkStart: stopping network for sandbox a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:04.021293238Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/01b98e9d-1088-4190-aed4-15932d9b3715 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:04.021317405Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:04.021324769Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:04.021332014Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:05.996167 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:54:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:05.996788 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:06.021287616Z" level=info msg="NetworkStart: stopping network for sandbox d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:06.021474777Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131 UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/0e427b19-4997-40f6-a9ba-06abd53d822d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:06.021506752Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:06.021515479Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:06 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:06.021524501Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:07.371096393Z" level=info msg="NetworkStart: stopping network for sandbox 14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:07.371240715Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/52d312c9-a7de-422a-bfe2-d5496c230fb9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:07.371263870Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:07.371270328Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:07.371276844Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:10.023589389Z" level=info msg="NetworkStart: stopping network for sandbox 4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:10.023730470Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/1a42e121-f8e3-43df-8a8d-f49f426471c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:10.023755604Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:10.023763509Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:10.023770835Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.024317022Z" level=info msg="NetworkStart: stopping network for sandbox 98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.024462711Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/e3d93049-9ab1-4cba-b3ab-bb619ce5ca0a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.024485114Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.024495403Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.024502190Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.026235947Z" level=info msg="NetworkStart: stopping network for sandbox 8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.026356481Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/fd67f0f6-31a6-4d81-9bc6-3d7e07a483ff Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.026379212Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.026385994Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.026391904Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.412669570Z" level=info msg="NetworkStart: stopping network for sandbox 79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.412804463Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/1796cc24-bce6-48e0-9aa1-ab596a8e3b43 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.412826119Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.412832563Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.412839630Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.413254821Z" level=info msg="NetworkStart: stopping network for sandbox 9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.413380480Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/956d844c-4948-4047-9c9e-faaafec476f7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.413402778Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.413410682Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.413417065Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416163076Z" level=info msg="NetworkStart: stopping network for sandbox d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416305287Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/5b3fdcf4-22ca-471f-9f25-d6b66fd06692 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416330701Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416337986Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416343884Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416485884Z" level=info msg="NetworkStart: stopping network for sandbox 390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416579931Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/0e9e4284-4f0d-4e0c-941c-e8a56e244b70 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416599150Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416605396Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.416610954Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.418211659Z" level=info msg="NetworkStart: stopping network for sandbox 11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.418339379Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/2a41d728-372b-43e4-abca-60e97938fe21 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.418364133Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.418371849Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:17.418378860Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:18.019592061Z" level=info msg="NetworkStart: stopping network for sandbox 27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:18.019714810Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/85bb2206-85c5-4a91-b601-103ab9829a13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:18.019736077Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:18.019743252Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:18.019749540Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:19.021109568Z" level=info msg="NetworkStart: stopping network for sandbox ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:19.021273320Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/15dd095f-ac0f-433a-9b86-d5cdf4524396 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:19.021299269Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:19.021306725Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:19 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:19.021314077Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:19.996820 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:54:19 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:19.997345 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:54:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:27.021941255Z" level=info msg="NetworkStart: stopping network for sandbox 871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:27.022109355Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/09a65fe4-c329-4480-921e-7be6fc2525e6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:27.022138397Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:27.022145325Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:27.022151796Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:27.916473 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:27.916494 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:27.916500 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:27.916508 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:27.916514 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:27.916522 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:54:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:27.916528 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running
Jan 23 17:54:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:28.141191418Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:54:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:31.021341583Z" level=info msg="NetworkStart: stopping network for sandbox 47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:31.021648713Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/a7ae5d05-1827-4740-97e5-2eee13bfd1d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:31.021674609Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:54:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:31.021682776Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:54:31 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:31.021689860Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:34.996380 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:54:34 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:34.996888 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.351456915Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.351505666Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-bd5eb72b\x2dfa62\x2d4316\x2d8d62\x2d112f5626c6f0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-bd5eb72b\x2dfa62\x2d4316\x2d8d62\x2d112f5626c6f0.mount has successfully entered the 'dead' state.
Jan 23 17:54:41 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-bd5eb72b\x2dfa62\x2d4316\x2d8d62\x2d112f5626c6f0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-bd5eb72b\x2dfa62\x2d4316\x2d8d62\x2d112f5626c6f0.mount has successfully entered the 'dead' state.
Jan 23 17:54:41 hub-master-0.workload.bos2.lab systemd[1]: run-netns-bd5eb72b\x2dfa62\x2d4316\x2d8d62\x2d112f5626c6f0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-bd5eb72b\x2dfa62\x2d4316\x2d8d62\x2d112f5626c6f0.mount has successfully entered the 'dead' state.
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.407354024Z" level=info msg="runSandbox: deleting pod ID 9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a from idIndex" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.407386101Z" level=info msg="runSandbox: removing pod sandbox 9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.407413714Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.407431092Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.419458666Z" level=info msg="runSandbox: removing pod sandbox from storage: 9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.422446526Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.422466663Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=4cbbd999-8abb-49ec-a2e6-5f23d2ab55fa name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:41.422686 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:54:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:41.422730 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:54:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:41.422754 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:54:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:41.422802 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:54:41 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-9d370350fd961a5c7b1f87387d138315d5c77e880b004e28f6a7cdebb8782e9a-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:54:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:41.498381 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.498683261Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.498722800Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.510153121Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/b25cf40c-2dec-44b1-9ce4-4efd8e7bea22 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:54:41 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:41.510172811Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.036846046Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.036887915Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4a614cf7\x2dbbaf\x2d4b49\x2db875\x2d08e01c91192d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4a614cf7\x2dbbaf\x2d4b49\x2db875\x2d08e01c91192d.mount has successfully entered the 'dead' state.
Jan 23 17:54:43 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4a614cf7\x2dbbaf\x2d4b49\x2db875\x2d08e01c91192d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4a614cf7\x2dbbaf\x2d4b49\x2db875\x2d08e01c91192d.mount has successfully entered the 'dead' state.
Jan 23 17:54:43 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4a614cf7\x2dbbaf\x2d4b49\x2db875\x2d08e01c91192d.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4a614cf7\x2dbbaf\x2d4b49\x2db875\x2d08e01c91192d.mount has successfully entered the 'dead' state.
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.083287509Z" level=info msg="runSandbox: deleting pod ID fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1 from idIndex" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.083313628Z" level=info msg="runSandbox: removing pod sandbox fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.083327757Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.083338427Z" level=info msg="runSandbox: unmounting shmPath for sandbox fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.103442679Z" level=info msg="runSandbox: removing pod sandbox from storage: fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.106782250Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:43.106799789Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=f9b2c009-e0c1-46d8-9f0d-5e922e5ba427 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:43.106934 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:54:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:43.106977 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:54:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:43.106999 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5"
Jan 23 17:54:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:43.107044 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(fddb80d2725ca88d5b794969a2cb0c11a755b6d57bbfab399bacc9e4c951b0b1): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.032930515Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.032972683Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4f4de206\x2d1b34\x2d4fe6\x2da37a\x2d7448e6f35482.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-4f4de206\x2d1b34\x2d4fe6\x2da37a\x2d7448e6f35482.mount has successfully entered the 'dead' state.
Jan 23 17:54:45 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4f4de206\x2d1b34\x2d4fe6\x2da37a\x2d7448e6f35482.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-4f4de206\x2d1b34\x2d4fe6\x2da37a\x2d7448e6f35482.mount has successfully entered the 'dead' state.
Jan 23 17:54:45 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4f4de206\x2d1b34\x2d4fe6\x2da37a\x2d7448e6f35482.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-4f4de206\x2d1b34\x2d4fe6\x2da37a\x2d7448e6f35482.mount has successfully entered the 'dead' state.
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.068310256Z" level=info msg="runSandbox: deleting pod ID b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988 from idIndex" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.068338310Z" level=info msg="runSandbox: removing pod sandbox b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.068351498Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.068363506Z" level=info msg="runSandbox: unmounting shmPath for sandbox b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.080452913Z" level=info msg="runSandbox: removing pod sandbox from storage: b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.083942657Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:45.083960303Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=447657a9-d40a-4c45-b538-9d28004775a9 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:45.084160 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:54:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:45.084213 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:54:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:45.084237 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:54:45 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:45.084286 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(b970e09277035488762799c5be704b67b98c9306cd8f19fde5a19b2db0686988): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14
Jan 23 17:54:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:46.996227 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:54:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:46.996930 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:54:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:47.432720 8631 certificate_manager.go:270] kubernetes.io/kubelet-serving: Rotating certificates
Jan 23 17:54:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:47.455259 8631 csr.go:261] certificate signing request csr-v6w46 is approved, waiting to be issued
Jan 23 17:54:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:47.473725 8631 csr.go:257] certificate signing request csr-v6w46 is issued
Jan 23 17:54:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:48.474772 8631 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate expiration is 2023-02-22 14:05:15 +0000 UTC, rotation deadline is 2023-02-17 10:43:41.915652306 +0000 UTC
Jan 23 17:54:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:48.474791 8631 certificate_manager.go:270] kubernetes.io/kubelet-serving: Waiting 592h48m53.440862902s for next certificate rotation
Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.034477093Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.034524664Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:54:49 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-01b98e9d\x2d1088\x2d4190\x2daed4\x2d15932d9b3715.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-01b98e9d\x2d1088\x2d4190\x2daed4\x2d15932d9b3715.mount has successfully entered the 'dead' state.
Jan 23 17:54:49 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-01b98e9d\x2d1088\x2d4190\x2daed4\x2d15932d9b3715.mount: Succeeded.
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-01b98e9d\x2d1088\x2d4190\x2daed4\x2d15932d9b3715.mount has successfully entered the 'dead' state. Jan 23 17:54:49 hub-master-0.workload.bos2.lab systemd[1]: run-netns-01b98e9d\x2d1088\x2d4190\x2daed4\x2d15932d9b3715.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-01b98e9d\x2d1088\x2d4190\x2daed4\x2d15932d9b3715.mount has successfully entered the 'dead' state. Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.074400406Z" level=info msg="runSandbox: deleting pod ID a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85 from idIndex" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.074430738Z" level=info msg="runSandbox: removing pod sandbox a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.074446965Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.074465657Z" level=info msg="runSandbox: unmounting shmPath for sandbox a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:49 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.082457645Z" level=info msg="runSandbox: removing pod sandbox from storage: a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.085914475Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:49.085934341Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=1916c43b-7ec7-480f-a6b4-00926f988e38 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:49.086140 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:49.086184 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:49.086212 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:54:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:49.086260 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(a39190051787b2442ac1ddc9ac4f947b46dce122ff331ea3113f44af847a3d85): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.032782862Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.032827109Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0e427b19\x2d4997\x2d40f6\x2da9ba\x2d06abd53d822d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-0e427b19\x2d4997\x2d40f6\x2da9ba\x2d06abd53d822d.mount has successfully entered the 'dead' state. Jan 23 17:54:51 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0e427b19\x2d4997\x2d40f6\x2da9ba\x2d06abd53d822d.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-0e427b19\x2d4997\x2d40f6\x2da9ba\x2d06abd53d822d.mount has successfully entered the 'dead' state. Jan 23 17:54:51 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0e427b19\x2d4997\x2d40f6\x2da9ba\x2d06abd53d822d.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-0e427b19\x2d4997\x2d40f6\x2da9ba\x2d06abd53d822d.mount has successfully entered the 'dead' state. Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.084290689Z" level=info msg="runSandbox: deleting pod ID d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131 from idIndex" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.084315627Z" level=info msg="runSandbox: removing pod sandbox d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.084331341Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.084342830Z" level=info msg="runSandbox: unmounting shmPath for sandbox d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.105459594Z" level=info msg="runSandbox: removing pod sandbox from storage: d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.108973356Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:51.108990986Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=b065bbc3-e39e-4046-99e5-6de81992dd07 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:51.109224 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:51.109268 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:51.109291 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:54:51 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:51.109339 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(d315d3bceeafd85732e115ddc9b9a489eae54317dc2f8603a1d5a3751e6a9131): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.381598705Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.381641607Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-52d312c9\x2da7de\x2d422a\x2dbfe2\x2dd5496c230fb9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-52d312c9\x2da7de\x2d422a\x2dbfe2\x2dd5496c230fb9.mount has successfully entered the 'dead' state. Jan 23 17:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-52d312c9\x2da7de\x2d422a\x2dbfe2\x2dd5496c230fb9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-52d312c9\x2da7de\x2d422a\x2dbfe2\x2dd5496c230fb9.mount has successfully entered the 'dead' state. Jan 23 17:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-netns-52d312c9\x2da7de\x2d422a\x2dbfe2\x2dd5496c230fb9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-52d312c9\x2da7de\x2d422a\x2dbfe2\x2dd5496c230fb9.mount has successfully entered the 'dead' state. 
Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.422311027Z" level=info msg="runSandbox: deleting pod ID 14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1 from idIndex" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.422336976Z" level=info msg="runSandbox: removing pod sandbox 14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.422350839Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.422363976Z" level=info msg="runSandbox: unmounting shmPath for sandbox 14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.435434251Z" level=info msg="runSandbox: removing pod sandbox from storage: 14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.438974293Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.438992378Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=b24ec550-838d-4f0f-af0f-047e486b8446 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:52.439209 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:52.439251 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:52.439273 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:52.439320 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(14233de89239733570a245875753d1b1591a9eb9a45131b98c44d7f8aac7b2d1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:54:52 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:52.520999 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.521360480Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.521390511Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.531904523Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/da5f127a-facd-4a7e-b3a3-3b0cde0559b5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:54:52 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:52.531924228Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:54:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:53.996420 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:53.996803484Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:53 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:53.996845351Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:54:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:54.008048710Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/17895415-ee5b-4ef9-904d-f05b2dd4cdba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:54:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:54.008082720Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.036324627Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.036367201Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1a42e121\x2df8e3\x2d43df\x2d8a8d\x2df49f426471c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-1a42e121\x2df8e3\x2d43df\x2d8a8d\x2df49f426471c8.mount has successfully entered the 'dead' state. Jan 23 17:54:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1a42e121\x2df8e3\x2d43df\x2d8a8d\x2df49f426471c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-1a42e121\x2df8e3\x2d43df\x2d8a8d\x2df49f426471c8.mount has successfully entered the 'dead' state. Jan 23 17:54:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1a42e121\x2df8e3\x2d43df\x2d8a8d\x2df49f426471c8.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-1a42e121\x2df8e3\x2d43df\x2d8a8d\x2df49f426471c8.mount has successfully entered the 'dead' state. 
Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.090398292Z" level=info msg="runSandbox: deleting pod ID 4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6 from idIndex" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.090424289Z" level=info msg="runSandbox: removing pod sandbox 4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.090437794Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.090448686Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.106427657Z" level=info msg="runSandbox: removing pod sandbox from storage: 4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.109433451Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:55.109451743Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=0b91774e-6eb3-4a04-9489-752cd90433ca name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:55.109641 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:55.109686 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:55.109710 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:54:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:54:55.109757 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4cab8bc44e0cbedc33c5c59fbffdc56ac50324478b08771187a10a995ed4fba6): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1 Jan 23 17:54:57 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:54:57.996200 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:54:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:57.996528134Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:54:57 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:57.996778725Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:54:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:58.008150959Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/6ac72209-1b34-413a-8114-5ff9e1b86f63 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:54:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:58.008172763Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:54:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:54:58.141775776Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:55:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:01.995800 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:55:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:01.996151535Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:01.996198260Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:01.996636 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" Jan 23 17:55:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:01.997139 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.007357398Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/cfa8490b-f1c1-4cbe-a413-988f84952123 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.007378100Z" level=info msg="Adding pod 
openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.035607304Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.035638419Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.036023702Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.036052688Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-fd67f0f6\x2d31a6\x2d4d81\x2d9bc6\x2d3d7e07a483ff.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-fd67f0f6\x2d31a6\x2d4d81\x2d9bc6\x2d3d7e07a483ff.mount has successfully entered the 'dead' state. Jan 23 17:55:02 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-e3d93049\x2d9ab1\x2d4cba\x2db3ab\x2dbb619ce5ca0a.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-e3d93049\x2d9ab1\x2d4cba\x2db3ab\x2dbb619ce5ca0a.mount has successfully entered the 'dead' state. Jan 23 17:55:02 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-e3d93049\x2d9ab1\x2d4cba\x2db3ab\x2dbb619ce5ca0a.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-e3d93049\x2d9ab1\x2d4cba\x2db3ab\x2dbb619ce5ca0a.mount has successfully entered the 'dead' state. Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.076306036Z" level=info msg="runSandbox: deleting pod ID 98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65 from idIndex" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.076332462Z" level=info msg="runSandbox: removing pod sandbox 98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.076346653Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.076360114Z" level=info msg="runSandbox: unmounting shmPath for sandbox 98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.077303295Z" level=info msg="runSandbox: deleting pod ID 8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126 from idIndex" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.077329874Z" level=info msg="runSandbox: removing pod sandbox 8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.077342254Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.077354753Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.092457551Z" level=info msg="runSandbox: removing pod sandbox from storage: 8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.092489174Z" level=info msg="runSandbox: removing pod sandbox from storage: 98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.095450130Z" level=info msg="runSandbox: releasing container name: 
k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.095468532Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=d1782ca8-09bd-426b-a2b5-a324a4505264 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.095681 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.095726 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.095749 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.095794 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.098664095Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.098684594Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=f1bc2cc2-5953-496f-9069-685a8f9f43f0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.098847 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.098877 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.098898 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.098935 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734 Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.423737735Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.423764842Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.424671989Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.424699929Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.427164509Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.427203285Z" level=info msg="runSandbox: cleaning up namespaces after failing to run 
sandbox 390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.427564784Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.427591087Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.429822984Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.429862448Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.473317332Z" level=info msg="runSandbox: deleting pod ID 9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab from idIndex" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.473348679Z" level=info msg="runSandbox: removing pod sandbox 9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.473361417Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.473374736Z" level=info msg="runSandbox: 
unmounting shmPath for sandbox 9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.473318376Z" level=info msg="runSandbox: deleting pod ID 79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce from idIndex" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.473426504Z" level=info msg="runSandbox: removing pod sandbox 79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.473438400Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.473451314Z" level=info msg="runSandbox: unmounting shmPath for sandbox 79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.481320513Z" level=info msg="runSandbox: deleting pod ID 390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0 from idIndex" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.481348099Z" level=info msg="runSandbox: removing pod sandbox 390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.481360607Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.481371476Z" level=info msg="runSandbox: unmounting shmPath for sandbox 390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.481322642Z" level=info msg="runSandbox: deleting pod ID d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b from idIndex" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.481430536Z" level=info msg="runSandbox: removing pod sandbox d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.481441992Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b 
name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.481453678Z" level=info msg="runSandbox: unmounting shmPath for sandbox d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.482308504Z" level=info msg="runSandbox: deleting pod ID 11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd from idIndex" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.482337857Z" level=info msg="runSandbox: removing pod sandbox 11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.482352284Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.482368186Z" level=info msg="runSandbox: unmounting shmPath for sandbox 11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.493441278Z" level=info msg="runSandbox: removing pod sandbox from storage: 79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.493442904Z" level=info msg="runSandbox: removing pod sandbox from storage: 9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.496722262Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.496740654Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=b6b35e0b-84f2-42bc-9d85-6014b186f88c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.496964 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed 
(add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.497002 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.497024 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.497071 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.499654894Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.499672745Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=c4ee0e42-1508-4279-b0e9-b7f4dc8f75e9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.499842 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.499873 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.499894 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.499930 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.502496628Z" level=info msg="runSandbox: removing pod sandbox from storage: 11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.502513979Z" level=info msg="runSandbox: removing pod sandbox from storage: 390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.502516509Z" level=info msg="runSandbox: removing pod sandbox from storage: d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.505864209Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.505883920Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=a57abca9-c9ab-48a3-b8a7-476c2a1b5780 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.506097 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.506128 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.506160 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.506197 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.508947271Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.508967851Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=f719ccc0-4895-41b1-b05a-c5d710f2b35b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.509156 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.509187 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.509218 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.509253 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.514237550Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.515056661Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=e2bffaca-4197-4889-bd08-94736281c16e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.515239 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.515271 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.515291 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:02.515329 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:02.542001 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:02.542173 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:02.542268 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542329099Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542368573Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:02.542356 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542469685Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542501464Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:02 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:02.542538 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542607403Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542642670Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542701078Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542735040Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542885606Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.542917636Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.569629298Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/4015694f-f795-40fc-94d4-cd927e2e3d88 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.569650177Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.570392096Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/d35ed7ae-f529-4dd1-86e3-11d4aaeb6e25 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.570410609Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.572097875Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/5fedca7c-af01-4eba-8846-f68530340158 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:02 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.572116146Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.572768831Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/52d51e87-f602-4484-b984-99ad8926798b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.572789509Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.574346857Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/f5fec57d-f8b3-4fe9-8950-aa70d90995a1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:02.574370157Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-0e9e4284\x2d4f0d\x2d4e0c\x2d941c\x2de8a56e244b70.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-0e9e4284\x2d4f0d\x2d4e0c\x2d941c\x2de8a56e244b70.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-0e9e4284\x2d4f0d\x2d4e0c\x2d941c\x2de8a56e244b70.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5b3fdcf4\x2d22ca\x2d471f\x2d9f25\x2dd6b66fd06692.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5b3fdcf4\x2d22ca\x2d471f\x2d9f25\x2dd6b66fd06692.mount: Succeeded.
Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5b3fdcf4\x2d22ca\x2d471f\x2d9f25\x2dd6b66fd06692.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-2a41d728\x2d372b\x2d43e4\x2dabca\x2d60e97938fe21.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-2a41d728\x2d372b\x2d43e4\x2dabca\x2d60e97938fe21.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-2a41d728\x2d372b\x2d43e4\x2dabca\x2d60e97938fe21.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-1796cc24\x2dbce6\x2d48e0\x2d9aa1\x2dab596a8e3b43.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-1796cc24\x2dbce6\x2d48e0\x2d9aa1\x2dab596a8e3b43.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-1796cc24\x2dbce6\x2d48e0\x2d9aa1\x2dab596a8e3b43.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-956d844c\x2d4948\x2d4047\x2d9c9e\x2dfaaafec476f7.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-956d844c\x2d4948\x2d4047\x2d9c9e\x2dfaaafec476f7.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-956d844c\x2d4948\x2d4047\x2d9c9e\x2dfaaafec476f7.mount: Succeeded.
Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-390b10260453506644ec05c4d0b90067f5fb31f6d95c83cc5de3e551b0c6edf0-userdata-shm.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d4449fadb1b3278c9c01e1f27400d04a9ad5a317f104026a3c65f49daa061b4b-userdata-shm.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-11cca0ab156d6ef900952838e130490788a080a600705864d456320199549dcd-userdata-shm.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-79262b55c52375eb29dc4d4518dd74192aa6c703ecd14a6a127f9d6ea78be0ce-userdata-shm.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-9d55c9845870de4cd80f7e4f8c68670efb5b14678782e54eab17c2668758c6ab-userdata-shm.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-fd67f0f6\x2d31a6\x2d4d81\x2d9bc6\x2d3d7e07a483ff.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-fd67f0f6\x2d31a6\x2d4d81\x2d9bc6\x2d3d7e07a483ff.mount: Succeeded. Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-e3d93049\x2d9ab1\x2d4cba\x2db3ab\x2dbb619ce5ca0a.mount: Succeeded.
Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-8f7ff8ff9d7378ce1be5deb21cd79d9e0a795d9c4203870c87545f26354ff126-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-98606a10f2d0b0aeaa17dcfb9f4967331d051738790cd28273366e68dc6d5b65-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.030746634Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.030784901Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-85bb2206\x2d85c5\x2d4a91\x2db601\x2d103ab9829a13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-85bb2206\x2d85c5\x2d4a91\x2db601\x2d103ab9829a13.mount has successfully entered the 'dead' state.
Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-85bb2206\x2d85c5\x2d4a91\x2db601\x2d103ab9829a13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-85bb2206\x2d85c5\x2d4a91\x2db601\x2d103ab9829a13.mount has successfully entered the 'dead' state.
Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-netns-85bb2206\x2d85c5\x2d4a91\x2db601\x2d103ab9829a13.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-85bb2206\x2d85c5\x2d4a91\x2db601\x2d103ab9829a13.mount has successfully entered the 'dead' state.
Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.060279926Z" level=info msg="runSandbox: deleting pod ID 27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0 from idIndex" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.060306977Z" level=info msg="runSandbox: removing pod sandbox 27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.060321908Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.060335121Z" level=info msg="runSandbox: unmounting shmPath for sandbox 27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:03 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.072470091Z" level=info msg="runSandbox: removing pod sandbox from storage: 27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.075836676Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:03.075856450Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=37bb9526-2caa-4c13-9347-7a8d35e8c982 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:03.076419 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:55:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:03.076466 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:55:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:03.076490 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:55:03 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:03.076536 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(27bf00fc6a4f8780aeebc211c5bde3c467d613bc7faa2ab4849d541b7fa0c0a0): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114 Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.032272361Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.032327348Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-15dd095f\x2dac0f\x2d433a\x2d9b86\x2dd5cdf4524396.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-15dd095f\x2dac0f\x2d433a\x2d9b86\x2dd5cdf4524396.mount has successfully entered the 'dead' state. Jan 23 17:55:04 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-15dd095f\x2dac0f\x2d433a\x2d9b86\x2dd5cdf4524396.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-15dd095f\x2dac0f\x2d433a\x2d9b86\x2dd5cdf4524396.mount has successfully entered the 'dead' state. Jan 23 17:55:04 hub-master-0.workload.bos2.lab systemd[1]: run-netns-15dd095f\x2dac0f\x2d433a\x2d9b86\x2dd5cdf4524396.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-15dd095f\x2dac0f\x2d433a\x2d9b86\x2dd5cdf4524396.mount has successfully entered the 'dead' state. 
Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.077320319Z" level=info msg="runSandbox: deleting pod ID ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774 from idIndex" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.077347874Z" level=info msg="runSandbox: removing pod sandbox ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.077364204Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.077382610Z" level=info msg="runSandbox: unmounting shmPath for sandbox ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.090458229Z" level=info msg="runSandbox: removing pod sandbox from storage: ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.095340459Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.095390793Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=473baa71-4e01-4497-b81a-d8903fd51add name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:04.095637 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:55:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:04.095797 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:55:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:04.095819 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:55:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:04.095869 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(ae47033afd5f12b1614b05f28b5fc588883f780cf8081d965edc3fc78bd12774): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6 Jan 23 17:55:04 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:04.995709 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.996042541Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:04 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:04.996086422Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:05.006898998Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/4c52b276-688d-4f70-b6c2-d10274b7afb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:05 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:05.006921678Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:08 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:08.995555 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9" Jan 23 17:55:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:08.995933032Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:08.995990275Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:09.007890721Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/847e2c2c-8bfd-44e7-b2ec-2a24d123e0d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:09 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:09.007913697Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.033198154Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for 
the condition" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.033253705Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-09a65fe4\x2dc329\x2d4480\x2d921e\x2d7be6fc2525e6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-09a65fe4\x2dc329\x2d4480\x2d921e\x2d7be6fc2525e6.mount has successfully entered the 'dead' state. Jan 23 17:55:12 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-09a65fe4\x2dc329\x2d4480\x2d921e\x2d7be6fc2525e6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-09a65fe4\x2dc329\x2d4480\x2d921e\x2d7be6fc2525e6.mount has successfully entered the 'dead' state. Jan 23 17:55:12 hub-master-0.workload.bos2.lab systemd[1]: run-netns-09a65fe4\x2dc329\x2d4480\x2d921e\x2d7be6fc2525e6.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-09a65fe4\x2dc329\x2d4480\x2d921e\x2d7be6fc2525e6.mount has successfully entered the 'dead' state. Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.073310214Z" level=info msg="runSandbox: deleting pod ID 871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909 from idIndex" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.073342423Z" level=info msg="runSandbox: removing pod sandbox 871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.073358606Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.073372872Z" level=info msg="runSandbox: unmounting shmPath for sandbox 871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.089438617Z" level=info msg="runSandbox: removing pod sandbox from storage: 871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.092374737Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:12.092393970Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=c5d242fa-24e7-4c7d-813e-1ba64251ff33 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:12.092693 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:55:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:12.092734 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:55:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:12.092757 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:55:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:12.092806 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(871b2ffea39187a547df63aba49431a2d52c8581ef58a4df52d352b91e782909): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b Jan 23 17:55:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:13.995809 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" Jan 23 17:55:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:13.996220395Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:13.996273671Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:13.996613 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" Jan 23 17:55:13 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:13.997107 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:55:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:14.007947196Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4955f22f-58bc-4e02-a776-4eff5a01d9a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:14 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:14.007967853Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:15 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:15.996302 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" Jan 23 17:55:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:15.996627571Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:15.996686561Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.007720608Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/8cea57e0-6a26-4c90-be0e-0f9ecd5fe42a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.007742232Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.032274643Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.032309006Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-a7ae5d05\x2d1827\x2d4740\x2d97e5\x2d2eee13bfd1d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-a7ae5d05\x2d1827\x2d4740\x2d97e5\x2d2eee13bfd1d1.mount has successfully entered the 'dead' state. Jan 23 17:55:16 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-a7ae5d05\x2d1827\x2d4740\x2d97e5\x2d2eee13bfd1d1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-a7ae5d05\x2d1827\x2d4740\x2d97e5\x2d2eee13bfd1d1.mount has successfully entered the 'dead' state. Jan 23 17:55:16 hub-master-0.workload.bos2.lab systemd[1]: run-netns-a7ae5d05\x2d1827\x2d4740\x2d97e5\x2d2eee13bfd1d1.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-a7ae5d05\x2d1827\x2d4740\x2d97e5\x2d2eee13bfd1d1.mount has successfully entered the 'dead' state. Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.063285816Z" level=info msg="runSandbox: deleting pod ID 47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d from idIndex" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.063310507Z" level=info msg="runSandbox: removing pod sandbox 47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.063326344Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.063338801Z" level=info msg="runSandbox: unmounting shmPath for sandbox 47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.079455355Z" level=info msg="runSandbox: removing pod sandbox from storage: 47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.082449054Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.082467772Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=a7d71c40-9ecb-487a-abc2-7f0ed44af41f name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:16.082682 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:55:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:16.082732 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:55:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:16.082754 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:55:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:16.082797 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa Jan 23 17:55:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:16.995645 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.995937800Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:16 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:16.995976249Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:17.007632273Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/92da0b2e-5532-48c5-8d2f-2412d3545a86 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:17.007660747Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:17 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-47f5bf1b726e432a9f4cda068b69bf4dbe2f3b4f218ce3f4287d3935e8557a7d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:55:17 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:17.996838 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4" Jan 23 17:55:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:17.997226897Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:17 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:17.997282415Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:18.011733139Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b1c25554-9f5f-47c3-b084-778a2e273108 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:18 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:18.011759452Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:24.995474 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" Jan 23 17:55:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:24.995830747Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:24.996091675Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:25.007715257Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/d614fd19-319c-4bc7-8b74-783336f5e2a0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:25 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:25.007734238Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:26.522278690Z" level=info msg="NetworkStart: stopping network for sandbox dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:26.522418729Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/b25cf40c-2dec-44b1-9ce4-4efd8e7bea22 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:26 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:26.522440970Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:26.522447539Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:26 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:26.522453707Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:26.996988 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" Jan 23 17:55:26 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:26.997480 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:27.917236 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:27.917254 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:27.917261 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:27.917267 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:27.917276 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:27.917284 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:55:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:27.917290 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:55:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:27.918651486Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b" id=07d2375c-1e6d-4cfe-ab19-fb43fc442086 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:55:27 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:27.918810582Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:6ef837352bb5161bb55b4fd8eff4b3de72f5600f43f2d3adbfa8f1d9786379ce,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2037fa0130ef960eef0661e278466a67eccc1460d37f7089f021dc94dfccd52b],Size_:349932379,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=07d2375c-1e6d-4cfe-ab19-fb43fc442086 name=/runtime.v1.ImageService/ImageStatus Jan 23 17:55:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:28.140896468Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:55:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:29.996287 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" Jan 23 17:55:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:29.996622930Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:29 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:29.996660342Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:55:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:30.007717449Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/545c4126-b8f8-4b1f-bd63-c1ae7177f02e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:30 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:30.007740301Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:37.544971174Z" level=info msg="NetworkStart: stopping network for sandbox 6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:37.545135392Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/da5f127a-facd-4a7e-b3a3-3b0cde0559b5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:37.545159221Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:37.545165587Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:37 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:37.545171586Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:39 hub-master-0.workload.bos2.lab 
crio[8584]: time="2023-01-23 17:55:39.021063651Z" level=info msg="NetworkStart: stopping network for sandbox 11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:39.021229113Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44 UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/17895415-ee5b-4ef9-904d-f05b2dd4cdba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:39.021258457Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:39.021266036Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:39.021272837Z" level=info msg="Deleting pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:40.000482 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" Jan 23 17:55:40 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:40.001789 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:43.022735347Z" level=info msg="NetworkStart: stopping network for sandbox ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:43.022880365Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/6ac72209-1b34-413a-8114-5ff9e1b86f63 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:43.022903097Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:43.022909732Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:43.022915855Z" level=info msg="Deleting pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.020555448Z" 
level=info msg="NetworkStart: stopping network for sandbox 135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.020693931Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/cfa8490b-f1c1-4cbe-a413-988f84952123 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.020715951Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.020722289Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.020728076Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.584342759Z" level=info msg="NetworkStart: stopping network for sandbox 579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.584465677Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/4015694f-f795-40fc-94d4-cd927e2e3d88 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.584488729Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.584495395Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.584501240Z" level=info msg="Deleting pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.584824081Z" level=info msg="NetworkStart: stopping network for sandbox 26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.584984038Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/d35ed7ae-f529-4dd1-86e3-11d4aaeb6e25 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:47 
hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.585013940Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.585021795Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.585030746Z" level=info msg="Deleting pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.585024930Z" level=info msg="NetworkStart: stopping network for sandbox 4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.585217703Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733 UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/52d51e87-f602-4484-b984-99ad8926798b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.585243827Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.585252111Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.585259115Z" level=info msg="Deleting pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.586246045Z" level=info msg="NetworkStart: stopping network for sandbox 1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.586365817Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/5fedca7c-af01-4eba-8846-f68530340158 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.586390453Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.586397920Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.586403524Z" level=info msg="Deleting pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.586943624Z" level=info msg="NetworkStart: stopping 
network for sandbox 568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.587067816Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1 UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/f5fec57d-f8b3-4fe9-8950-aa70d90995a1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.587090360Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.587098351Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:47.587107482Z" level=info msg="Deleting pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:50.021851609Z" level=info msg="NetworkStart: stopping network for sandbox b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:50.022000772Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/4c52b276-688d-4f70-b6c2-d10274b7afb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:50.022023554Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:50.022030700Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:50.022038023Z" level=info msg="Deleting pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:55:53.996183 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" Jan 23 17:55:53 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:55:53.996703 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:55:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:54.022341903Z" level=info msg="NetworkStart: stopping 
network for sandbox 4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:55:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:54.022483193Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/847e2c2c-8bfd-44e7-b2ec-2a24d123e0d1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:55:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:54.022504091Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Jan 23 17:55:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:54.022510415Z" level=warning msg="falling back to loading from existing plugins on disk" Jan 23 17:55:54 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:54.022516282Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:55:57 hub-master-0.workload.bos2.lab sshd[195957]: Accepted publickey for core from 2600:52:7:18::11 port 46686 ssh2: ED25519 SHA256:51RsaYMAVDXjZ4ofvNlClwmCDL0eebyMyw8HOKcupS0 Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[1]: Created slice User Slice of UID 1000. -- Subject: Unit user-1000.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-1000.slice has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[1]: Starting User runtime directory /run/user/1000... -- Subject: Unit user-runtime-dir@1000.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@1000.service has begun starting up. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd-logind[3052]: New session 5 of user core. -- Subject: A new session 5 has been created for user core -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 5 has been created for the user core. -- -- The leading process of the session is 195957. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[1]: Started User runtime directory /run/user/1000. -- Subject: Unit user-runtime-dir@1000.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@1000.service has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[1]: Starting User Manager for UID 1000... -- Subject: Unit user@1000.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@1000.service has begun starting up. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: pam_unix(systemd-user:session): session opened for user core by (uid=0) Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: /usr/lib/systemd/user/podman-kube@.service:10: Failed to parse service restart specifier, ignoring: never Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Reached target Paths. 
-- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started Daily Cleanup of User's Temporary Directories. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Listening on GnuPG cryptographic agent and passphrase cache (restricted). -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Listening on GnuPG cryptographic agent (ssh-agent emulation). -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Listening on GnuPG network certificate management daemon. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started Podman auto-update timer. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Reached target Timers. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Listening on GnuPG cryptographic agent and passphrase cache. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Starting D-Bus User Message Bus Socket. -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Listening on Podman API Socket. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Created slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Starting Create User's Volatile Files and Directories... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. 
Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers). -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started Create User's Volatile Files and Directories. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Listening on D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Reached target Sockets. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Reached target Basic System. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[1]: Started User Manager for UID 1000. -- Subject: Unit user@1000.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@1000.service has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Starting Podman Start All Containers With Restart Policy Set To Always... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[1]: Started Session 5 of user core. -- Subject: Unit session-5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-5.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Starting A template for running K8s workloads via podman-play-kube... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Starting Podman auto-update service... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Starting Podman API Service... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Jan 23 17:55:57 hub-master-0.workload.bos2.lab sshd[195957]: pam_unix(sshd:session): session opened for user core by (uid=0) Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started Podman API Service. 
-- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196000]: time="2023-01-23T17:55:57Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[195997]: time="2023-01-23T17:55:57Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196069]: time="2023-01-23T17:55:57Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196156]: time="2023-01-23T17:55:57Z" level=info msg="/usr/bin/podman filtering at log level info" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196069]: time="2023-01-23T17:55:57Z" level=info msg="Setting parallel job count to 337" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196069]: time="2023-01-23T17:55:57Z" level=info msg="Using systemd socket activation to determine API endpoint" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196069]: time="2023-01-23T17:55:57Z" level=info msg="API service listening on \"@00091\". URI: \"@00091\"" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196069]: time="2023-01-23T17:55:57Z" level=info msg="API service listening on \"@00091\"" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196156]: time="2023-01-23T17:55:57Z" level=info msg="Setting parallel job count to 337" Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196072]: Error: open default: no such file or directory Jan 23 17:55:57 hub-master-0.workload.bos2.lab podman[196069]: Error: failed to start API service: accept unixgram @00091: accept4: operation not supported Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started D-Bus User Message Bus. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Created slice user.slice. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started podman-pause-cc5845be.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started podman-pause-0a584dc7.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: podman-kube@default.service: Main process exited, code=exited, status=125/n/a Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: podman-kube@default.service: Failed with result 'exit-code'. -- Subject: Unit failed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit UNIT has entered the 'failed' state with result 'exit-code'. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Failed to start A template for running K8s workloads via podman-play-kube. 
-- Subject: Unit UNIT has failed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has failed. -- -- The result is failed. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: podman.service: Main process exited, code=exited, status=125/n/a Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: podman.service: Failed with result 'exit-code'. -- Subject: Unit failed -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit UNIT has entered the 'failed' state with result 'exit-code'. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started podman-pause-623bfaba.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started Podman Start All Containers With Restart Policy Set To Always. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started podman-pause-7ad9b4cc.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Started Podman auto-update service. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Reached target Default. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Jan 23 17:55:57 hub-master-0.workload.bos2.lab systemd[195982]: Startup finished in 497ms. -- Subject: User manager start-up is now complete -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The user manager instance for user 1000 has been started. All services queued -- for starting have been started. Note that other services might still be starting -- up or be started at any later time. -- -- Startup of the manager took 497846 microseconds. 
Jan 23 17:55:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:58.146342611Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:55:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:59.021685984Z" level=info msg="NetworkStart: stopping network for sandbox da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:55:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:59.021914788Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/4955f22f-58bc-4e02-a776-4eff5a01d9a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:55:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:59.021936882Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:55:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:59.021944427Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:55:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:55:59.021951173Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:55:59 hub-master-0.workload.bos2.lab sshd[196001]: Received disconnect from 2600:52:7:18::11 port 46686:11: disconnected by user
Jan 23 17:55:59 hub-master-0.workload.bos2.lab sshd[196001]: Disconnected from user core 2600:52:7:18::11 port 46686
Jan 23 17:55:59 hub-master-0.workload.bos2.lab sshd[195957]: pam_unix(sshd:session): session closed for user core
Jan 23 17:55:59 hub-master-0.workload.bos2.lab systemd[1]: session-5.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-5.scope has successfully entered the 'dead' state.
Jan 23 17:55:59 hub-master-0.workload.bos2.lab systemd-logind[3052]: Session 5 logged out. Waiting for processes to exit.
Jan 23 17:55:59 hub-master-0.workload.bos2.lab systemd[1]: session-5.scope: Consumed 71ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-5.scope completed and consumed the indicated resources.
Jan 23 17:55:59 hub-master-0.workload.bos2.lab systemd-logind[3052]: Removed session 5.
-- Subject: Session 5 has been terminated
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 5 has been terminated.
Jan 23 17:56:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:01.021392907Z" level=info msg="NetworkStart: stopping network for sandbox ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:01.021564902Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14 UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/8cea57e0-6a26-4c90-be0e-0f9ecd5fe42a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:01.021591007Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:56:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:01.021598958Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:56:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:01.021606187Z" level=info msg="Deleting pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:02.021091027Z" level=info msg="NetworkStart: stopping network for sandbox d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:02.021248561Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757 UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/92da0b2e-5532-48c5-8d2f-2412d3545a86 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:02.021274297Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:56:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:02.021281165Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:56:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:02.021288594Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:03.025535241Z" level=info msg="NetworkStart: stopping network for sandbox 5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:03.025682871Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291 UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/b1c25554-9f5f-47c3-b084-778a2e273108 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:03.025706225Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:03.025712216Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:56:03 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:03.025719127Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:05.996406 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:56:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:05.997187 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:56:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496568.1199] policy: auto-activating connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 17:56:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496568.1205] device (eno12409): Activation: starting connection 'Wired Connection' (99853833-baac-4bca-8508-0bff9efdaf37)
Jan 23 17:56:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496568.1205] device (eno12409): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 23 17:56:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496568.1207] device (eno12409): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 23 17:56:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496568.1211] device (eno12409): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 23 17:56:08 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496568.1215] dhcp4 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[1]: Stopping User Manager for UID 1000...
-- Subject: Unit user@1000.service has begun shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user@1000.service has begun shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped target Default.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopping Podman Start All Containers With Restart Policy Set To Always...
-- Subject: Unit UNIT has begun shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has begun shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopping D-Bus User Message Bus...
-- Subject: Unit UNIT has begun shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has begun shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Removed slice podman\x2dkube.slice.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopping podman-pause-7ad9b4cc.scope.
-- Subject: Unit UNIT has begun shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has begun shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped D-Bus User Message Bus.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped podman-pause-7ad9b4cc.scope.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Removed slice user.slice.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab sh[196552]: time="2023-01-23T17:56:09Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 17:56:09 hub-master-0.workload.bos2.lab sh[196552]: Error: you must provide at least one name or id
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: podman-restart.service: Control process exited, code=exited status=125
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: podman-restart.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit UNIT has entered the 'failed' state with result 'exit-code'.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped Podman Start All Containers With Restart Policy Set To Always.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped target Basic System.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped target Sockets.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Closed GnuPG network certificate management daemon.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Closed D-Bus User Message Bus Socket.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Closed Podman API Socket.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Closed GnuPG cryptographic agent and passphrase cache.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Closed GnuPG cryptographic agent (ssh-agent emulation).
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped target Paths.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped Create User's Volatile Files and Directories.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped target Timers.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped Daily Cleanup of User's Temporary Directories.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Stopped Podman auto-update timer.
-- Subject: Unit UNIT has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Reached target Shutdown.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Started Exit the Session.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195982]: Reached target Exit the Session.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[195984]: pam_unix(systemd-user:session): session closed for user core
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[1]: user@1000.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit user@1000.service has successfully entered the 'dead' state.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[1]: Stopped User Manager for UID 1000.
-- Subject: Unit user@1000.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user@1000.service has finished shutting down.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[1]: user@1000.service: Consumed 1.013s CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit user@1000.service completed and consumed the indicated resources.
Jan 23 17:56:09 hub-master-0.workload.bos2.lab systemd[1]: Stopping User runtime directory /run/user/1000...
-- Subject: Unit user-runtime-dir@1000.service has begun shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-runtime-dir@1000.service has begun shutting down.
Jan 23 17:56:10 hub-master-0.workload.bos2.lab systemd[1]: run-user-1000.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-user-1000.mount has successfully entered the 'dead' state.
Jan 23 17:56:10 hub-master-0.workload.bos2.lab systemd[1]: user-runtime-dir@1000.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit user-runtime-dir@1000.service has successfully entered the 'dead' state.
Jan 23 17:56:10 hub-master-0.workload.bos2.lab systemd[1]: Stopped User runtime directory /run/user/1000.
-- Subject: Unit user-runtime-dir@1000.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-runtime-dir@1000.service has finished shutting down.
Jan 23 17:56:10 hub-master-0.workload.bos2.lab systemd[1]: user-runtime-dir@1000.service: Consumed 3ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit user-runtime-dir@1000.service completed and consumed the indicated resources.
Jan 23 17:56:10 hub-master-0.workload.bos2.lab systemd[1]: Removed slice User Slice of UID 1000.
-- Subject: Unit user-1000.slice has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-1000.slice has finished shutting down.
Jan 23 17:56:10 hub-master-0.workload.bos2.lab systemd[1]: user-1000.slice: Consumed 1.093s CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit user-1000.slice completed and consumed the indicated resources.
Jan 23 17:56:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:10.020906838Z" level=info msg="NetworkStart: stopping network for sandbox faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:10.021330975Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/d614fd19-319c-4bc7-8b74-783336f5e2a0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:10.021355278Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:56:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:10.021362152Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:56:10 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:10.021368696Z" level=info msg="Deleting pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:10 hub-master-0.workload.bos2.lab NetworkManager[3328]: <info>  [1674496570.0338] dhcp6 (eno12409): activation: beginning transaction (timeout in 90 seconds)
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.533903445Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58): error removing pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.533941856Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b25cf40c\x2d2dec\x2d44b1\x2d9ce4\x2d4efd8e7bea22.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-b25cf40c\x2d2dec\x2d44b1\x2d9ce4\x2d4efd8e7bea22.mount has successfully entered the 'dead' state.
Jan 23 17:56:11 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b25cf40c\x2d2dec\x2d44b1\x2d9ce4\x2d4efd8e7bea22.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-b25cf40c\x2d2dec\x2d44b1\x2d9ce4\x2d4efd8e7bea22.mount has successfully entered the 'dead' state.
Jan 23 17:56:11 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b25cf40c\x2d2dec\x2d44b1\x2d9ce4\x2d4efd8e7bea22.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-b25cf40c\x2d2dec\x2d44b1\x2d9ce4\x2d4efd8e7bea22.mount has successfully entered the 'dead' state.
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.578348408Z" level=info msg="runSandbox: deleting pod ID dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58 from idIndex" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.578379842Z" level=info msg="runSandbox: removing pod sandbox dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.578395604Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.578434938Z" level=info msg="runSandbox: unmounting shmPath for sandbox dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.594436676Z" level=info msg="runSandbox: removing pod sandbox from storage: dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.597367348Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.597385589Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0" id=054d1e0e-72ab-471f-bc31-2cfe7962e649 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:11.597623 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:56:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:11.597888 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:56:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:11.597912 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:56:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:11.597963 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(4118bc95-e963-4fc7-bb2e-ceda3fe6f298)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_4118bc95-e963-4fc7-bb2e-ceda3fe6f298_0(dca3c16afb55dcd6e7b89f9bc1c7b8567288298edb772a317eb92435fa1ffa58): error adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/4118bc95-e963-4fc7-bb2e-ceda3fe6f298]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab" podUID=4118bc95-e963-4fc7-bb2e-ceda3fe6f298
Jan 23 17:56:11 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:11.668592 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab"
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.668913246Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-11-hub-master-0.workload.bos2.lab/POD" id=baa7b8cf-6e58-4c07-865e-dca34292ae62 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.668945017Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.680305410Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2dfc90ef95cd32a4a574ec95035b6bb8493f123c8ed9390feb6518338fc527f9 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/de25fa00-e4dd-40b8-b926-67301da68166 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:11 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:11.680325097Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:15.022667885Z" level=info msg="NetworkStart: stopping network for sandbox 991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:15.023015337Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/545c4126-b8f8-4b1f-bd63-c1ae7177f02e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:15.023038932Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:56:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:15.023046851Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:56:15 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:15.023055070Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:16.996985 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:56:16 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:16.997496 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:56:21 hub-master-0.workload.bos2.lab conmon[178369]: conmon 77cd36a56cbf09e9bade : container 178382 exited with status 1
Jan 23 17:56:21 hub-master-0.workload.bos2.lab systemd[1]: crio-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope has successfully entered the 'dead' state.
Jan 23 17:56:21 hub-master-0.workload.bos2.lab systemd[1]: crio-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope: Consumed 3.702s CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope completed and consumed the indicated resources.
Jan 23 17:56:21 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope has successfully entered the 'dead' state.
Jan 23 17:56:21 hub-master-0.workload.bos2.lab systemd[1]: crio-conmon-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope: Consumed 53ms CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit crio-conmon-77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f.scope completed and consumed the indicated resources.
Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.556835686Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1): error removing pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.556871931Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:22 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-da5f127a\x2dfacd\x2d4a7e\x2db3a3\x2d3b0cde0559b5.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-da5f127a\x2dfacd\x2d4a7e\x2db3a3\x2d3b0cde0559b5.mount has successfully entered the 'dead' state.
Jan 23 17:56:22 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-da5f127a\x2dfacd\x2d4a7e\x2db3a3\x2d3b0cde0559b5.mount: Succeeded.
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-da5f127a\x2dfacd\x2d4a7e\x2db3a3\x2d3b0cde0559b5.mount has successfully entered the 'dead' state. Jan 23 17:56:22 hub-master-0.workload.bos2.lab systemd[1]: run-netns-da5f127a\x2dfacd\x2d4a7e\x2db3a3\x2d3b0cde0559b5.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-da5f127a\x2dfacd\x2d4a7e\x2db3a3\x2d3b0cde0559b5.mount has successfully entered the 'dead' state. Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.599350353Z" level=info msg="runSandbox: deleting pod ID 6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1 from idIndex" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.599376697Z" level=info msg="runSandbox: removing pod sandbox 6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.599395971Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.599410540Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:22 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1-userdata-shm.mount has successfully entered the 'dead' state. 
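Every failed RunPodSandbox in this stretch reports the same Multus condition: the default-network readiness indicator at /var/run/multus/cni/net.d/10-ovn-kubernetes.conf never appears, because ovnkube-node (the component that writes it) is crash-looping. The "pollimmediate error: timed out waiting for the condition" text is the error string of the Kubernetes wait helpers. Below is a minimal Go sketch of that polling pattern, not Multus's literal source; the file path comes from the log, while the 1-second poll interval and 10-second timeout are illustrative assumptions:

```go
// Sketch of the readiness-indicator wait behind the repeated
// "still waiting for readinessindicatorfile @ ...: pollimmediate error:
// timed out waiting for the condition" messages: poll for the CNI config
// written by the default network plugin until it exists or we time out.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func waitForReadinessIndicator(path string, timeout time.Duration) error {
	// wait.PollImmediate checks the condition once right away, then every
	// interval until it returns true or the timeout elapses.
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		if _, err := os.Stat(path); err != nil {
			return false, nil // file not written yet; keep polling
		}
		return true, nil
	})
}

func main() {
	const indicator = "/var/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForReadinessIndicator(indicator, 10*time.Second); err != nil {
		// wait.ErrWaitTimeout stringifies to exactly
		// "timed out waiting for the condition", as seen in the log.
		fmt.Println("pollimmediate error:", err)
	}
}
```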
Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.615453068Z" level=info msg="runSandbox: removing pod sandbox from storage: 6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.618395820Z" level=info msg="runSandbox: releasing container name: k8s_POD_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.618413523Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0" id=eba7463d-9f15-4907-8162-10759383d487 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:22.618629 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:22.618823 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:22.618846 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:22.618898 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf374316-9255-4614-af0e-15402ae67a30)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf374316-9255-4614-af0e-15402ae67a30_0(6f4c5eccaaa0d2378b2916b11677d7a93a89c02e66528c36b97607bf956bf4b1): error adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/bf374316-9255-4614-af0e-15402ae67a30]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" podUID=bf374316-9255-4614-af0e-15402ae67a30 Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:22.689847 8631 generic.go:296] "Generic (PLEG): container finished" podID=b6c2cdc5-967e-4062-b6e6-f6cf372cc21c containerID="77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f" exitCode=1 Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:22.689931 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerDied Data:77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f} Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:22.689962 8631 scope.go:115] "RemoveContainer" containerID="9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f" Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:22.690080 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab" Jan 23 17:56:22 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:22.690279 8631 scope.go:115] "RemoveContainer" containerID="77cd36a56cbf09e9bade7aa4b977d58f6cfb4a9d1323d0a11ae4d03ceec4d16f" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.690490253Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/installer-11-hub-master-0.workload.bos2.lab/POD" id=78a30dce-ade9-4ba3-8cbd-b423ba36156c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.690517286Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.690650113Z" level=info msg="Removing container: 9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f" id=491b9ef0-7b0e-4bfc-9f12-fbabd9b82c27 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.690799875Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=0374fa1b-950d-4b0f-b3db-eab6f141b37b name=/runtime.v1.ImageService/ImageStatus Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.690924286Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0374fa1b-950d-4b0f-b3db-eab6f141b37b name=/runtime.v1.ImageService/ImageStatus Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.691548897Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" id=cb5b6774-18f6-40f7-ac38-4a1c3cae3ffb name=/runtime.v1.ImageService/ImageStatus Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.691704618Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ec008568b0a61a5020d14f73278c7d0bc46935e8ba878d1e7687343a3e7fb88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0],Size_:487525318,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=cb5b6774-18f6-40f7-ac38-4a1c3cae3ffb name=/runtime.v1.ImageService/ImageStatus Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.692304212Z" level=info msg="Creating container: openshift-multus/multus-cdt6c/kube-multus" id=a761d9e7-f903-4e7a-9ac6-f366adf3d6f3 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.692367691Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.701430088Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver 
ID:fba9b83270edd990cf7820d45abcb6731cc8d2c24111f9842e5a35e70d5a9d13 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/a52eedf0-3348-402e-8bc9-c0e27a22f18e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.701449574Z" level=info msg="Adding pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.734469977Z" level=info msg="Removed container 9e98c79ccd8ab7067bcf12cf2fb309f3707ed025444b3e2ac3b3cf22bf50fc7f: openshift-multus/multus-cdt6c/kube-multus" id=491b9ef0-7b0e-4bfc-9f12-fbabd9b82c27 name=/runtime.v1.RuntimeService/RemoveContainer Jan 23 17:56:22 hub-master-0.workload.bos2.lab systemd[1]: Started crio-conmon-7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0.scope. -- Subject: Unit crio-conmon-7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-conmon-7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0.scope has finished starting up. -- -- The start-up result is done. Jan 23 17:56:22 hub-master-0.workload.bos2.lab systemd[1]: Started libcontainer container 7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0. -- Subject: Unit crio-7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit crio-7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0.scope has finished starting up. -- -- The start-up result is done. 
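Before starting the replacement kube-multus container, the kubelet lines above resolve the image strictly by digest through the CRI ImageService ("Checking image status: quay.io/...@sha256:...") and only then issue CreateContainer. The sketch below performs the same query against CRI-O's gRPC socket; the socket path and timeout are assumptions, and the digest is copied from the log:

```go
// Query image status over the CRI v1 API, mirroring the kubelet's
// "Checking image status" / "Image status: &ImageStatusResponse{...}" lines.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI-O socket path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Look up by digest, exactly as the kubelet does for the multus image.
	resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{
			Image: "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0",
		},
	})
	if err != nil {
		panic(err)
	}
	if resp.Image == nil {
		fmt.Println("image not present in local storage")
		return
	}
	fmt.Printf("image id: %s, size: %d bytes\n", resp.Image.Id, resp.Image.Size_)
}
```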
Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.848726517Z" level=info msg="Created container 7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0: openshift-multus/multus-cdt6c/kube-multus" id=a761d9e7-f903-4e7a-9ac6-f366adf3d6f3 name=/runtime.v1.RuntimeService/CreateContainer Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.849150618Z" level=info msg="Starting container: 7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0" id=3b473295-87ed-4ab4-b68d-1fa8fc3f3d0e name=/runtime.v1.RuntimeService/StartContainer Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.867041774Z" level=info msg="Started container" PID=197061 containerID=7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0 description=openshift-multus/multus-cdt6c/kube-multus id=3b473295-87ed-4ab4-b68d-1fa8fc3f3d0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd49a513a7120e7101cb9455b8dd7a3b0553ea56a00fe9aa3d0d9cb9870a7f8 Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.871775363Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_1af1b288-74bd-40a1-8dce-71b075e89ce6\"" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.882356099Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.882374309Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.893730471Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\"" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.903600131Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.903617521Z" level=info msg="Updated default CNI network name to multus-cni-network" Jan 23 17:56:22 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:22.903626867Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_1af1b288-74bd-40a1-8dce-71b075e89ce6\"" Jan 23 17:56:23 hub-master-0.workload.bos2.lab systemd[1]: var-lib-containers-storage-overlay-074d883b51932fe3c5f429ac15b6a753b445cfe0584ddd1af3a8206ce9649c64-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-074d883b51932fe3c5f429ac15b6a753b445cfe0584ddd1af3a8206ce9649c64-merged.mount has successfully entered the 'dead' state. 
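The "CNI monitoring event CREATE/REMOVE" lines show CRI-O noticing the multus binary being atomically swapped into /var/lib/cni/bin (written as upgrade_<uuid>, then renamed) and re-scanning its network configuration, after which it re-announces "Updated default CNI network name to multus-cni-network". A sketch of that watch-and-rescan loop using github.com/fsnotify/fsnotify follows; the library choice and the rescan comment are assumptions about the mechanism, not CRI-O's literal code:

```go
// Watch the CNI plugin directory and react to binary create/remove events,
// as in the "CNI monitoring event CREATE/REMOVE" log lines.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/var/lib/cni/bin"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev := <-w.Events:
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			// On CREATE/REMOVE the runtime re-reads its config directory
			// (here /etc/kubernetes/cni/net.d, where 00-multus.conf lives)
			// and re-selects the default network.
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```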
Jan 23 17:56:23 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:23.693989 8631 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cdt6c" event=&{ID:b6c2cdc5-967e-4062-b6e6-f6cf372cc21c Type:ContainerStarted Data:7a1c676a8e11fc197b9f1acf6f8d9e0d1909daa86786bb4c47fa92f86282acd0} Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.032753120Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44): error removing pod openshift-dns_dns-default-srzv5 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.032972642Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-17895415\x2dee5b\x2d4ef9\x2d904d\x2df05b2dd4cdba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-17895415\x2dee5b\x2d4ef9\x2d904d\x2df05b2dd4cdba.mount has successfully entered the 'dead' state. Jan 23 17:56:24 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-17895415\x2dee5b\x2d4ef9\x2d904d\x2df05b2dd4cdba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-17895415\x2dee5b\x2d4ef9\x2d904d\x2df05b2dd4cdba.mount has successfully entered the 'dead' state. Jan 23 17:56:24 hub-master-0.workload.bos2.lab systemd[1]: run-netns-17895415\x2dee5b\x2d4ef9\x2d904d\x2df05b2dd4cdba.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-17895415\x2dee5b\x2d4ef9\x2d904d\x2df05b2dd4cdba.mount has successfully entered the 'dead' state. 
Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.075298474Z" level=info msg="runSandbox: deleting pod ID 11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44 from idIndex" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.075323370Z" level=info msg="runSandbox: removing pod sandbox 11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.075336886Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.075348174Z" level=info msg="runSandbox: unmounting shmPath for sandbox 11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.088422324Z" level=info msg="runSandbox: removing pod sandbox from storage: 11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.091979920Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:24.091996900Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0" id=ea29730f-d73c-46ee-bad3-57405681acd0 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:24.092104 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:56:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:24.092145 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:56:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:24.092165 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44): error adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-srzv5" Jan 23 17:56:24 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:24.092210 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-srzv5_openshift-dns(3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-srzv5_openshift-dns_3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e_0(11b57a366cda06f4bb26aa91961b4c70429e42139d42bbead9301b6659726f44): error adding pod openshift-dns_dns-default-srzv5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-srzv5/3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-srzv5" podUID=3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e Jan 23 17:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:27.917597 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:27.917618 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:27.917628 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:27.917635 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:27.917645 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:27.917653 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:56:27 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:27.917659 8631 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-hub-master-0.workload.bos2.lab" status=Running Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.033982905Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f): error removing pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.034020459Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-6ac72209\x2d1b34\x2d413a\x2d8114\x2d5ff9e1b86f63.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-6ac72209\x2d1b34\x2d413a\x2d8114\x2d5ff9e1b86f63.mount has successfully entered the 'dead' state. Jan 23 17:56:28 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-6ac72209\x2d1b34\x2d413a\x2d8114\x2d5ff9e1b86f63.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-6ac72209\x2d1b34\x2d413a\x2d8114\x2d5ff9e1b86f63.mount has successfully entered the 'dead' state. Jan 23 17:56:28 hub-master-0.workload.bos2.lab systemd[1]: run-netns-6ac72209\x2d1b34\x2d413a\x2d8114\x2d5ff9e1b86f63.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-6ac72209\x2d1b34\x2d413a\x2d8114\x2d5ff9e1b86f63.mount has successfully entered the 'dead' state. Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.067282115Z" level=info msg="runSandbox: deleting pod ID ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f from idIndex" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.067309538Z" level=info msg="runSandbox: removing pod sandbox ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.067323984Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.067336755Z" level=info msg="runSandbox: unmounting shmPath for sandbox ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f-userdata-shm.mount has successfully entered the 'dead' state. 
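Each failed sandbox is dismantled through the identical runSandbox sequence logged above: delete the pod ID from the idIndex, remove the sandbox, delete the container ID from the idIndex, unmount the shmPath, remove the sandbox from storage, then release the reserved k8s_POD_* container and sandbox names so the kubelet's next attempt can reuse them. A purely structural sketch with stubbed, hypothetical helpers (none of these names are CRI-O's real functions):

```go
// Ordered error-path teardown matching the "runSandbox: ..." log sequence.
package main

import "log"

func cleanupFailedSandbox(id string) {
	steps := []struct {
		msg string
		fn  func(string) error // stub standing in for the real work
	}{
		{"deleting pod ID %s from idIndex", func(string) error { return nil }},
		{"removing pod sandbox %s", func(string) error { return nil }},
		{"deleting container ID from idIndex for sandbox %s", func(string) error { return nil }},
		{"unmounting shmPath for sandbox %s", func(string) error { return nil }},
		{"removing pod sandbox from storage: %s", func(string) error { return nil }},
	}
	for _, s := range steps {
		log.Printf("runSandbox: "+s.msg, id)
		if err := s.fn(id); err != nil {
			log.Printf("runSandbox cleanup step failed: %v", err)
		}
	}
	// Finally the reserved k8s_POD_<pod>_<ns>_<uid>_0 names are released so
	// the retry ("Need to start a new one") can claim them again.
}

func main() { cleanupFailedSandbox("ecc95ac623a2") }
```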
Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.079447697Z" level=info msg="runSandbox: removing pod sandbox from storage: ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.082862998Z" level=info msg="runSandbox: releasing container name: k8s_POD_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.082881633Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0" id=3c14dcb7-2d9e-4277-98cb-8a764705585c name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:28.083099 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:56:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:28.083149 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:56:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:28.083174 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" Jan 23 17:56:28 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:28.083230 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler(7cca1a4c-e8cc-4938-9e14-a4d8d979ad14)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab_openshift-kube-scheduler_7cca1a4c-e8cc-4938-9e14-a4d8d979ad14_0(ecc95ac623a22dba10269bbc6b9475cad3f812f5586e6439ca166bf3322a2c9f): error adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/7cca1a4c-e8cc-4938-9e14-a4d8d979ad14]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab" podUID=7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 Jan 23 17:56:28 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:28.143213248Z" level=warning msg="Found defunct process with PID 7327 (runc)" Jan 23 17:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:29.997048 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2" Jan 23 17:56:29 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:29.997674 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.031008255Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226): error removing pod openshift-ingress-canary_ingress-canary-7v8f9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.031049403Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-cfa8490b\x2df1c1\x2d4cbe\x2da413\x2d988f84952123.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-cfa8490b\x2df1c1\x2d4cbe\x2da413\x2d988f84952123.mount has successfully entered the 'dead' state. Jan 23 17:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-cfa8490b\x2df1c1\x2d4cbe\x2da413\x2d988f84952123.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-cfa8490b\x2df1c1\x2d4cbe\x2da413\x2d988f84952123.mount has successfully entered the 'dead' state. Jan 23 17:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-netns-cfa8490b\x2df1c1\x2d4cbe\x2da413\x2d988f84952123.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-cfa8490b\x2df1c1\x2d4cbe\x2da413\x2d988f84952123.mount has successfully entered the 'dead' state. 
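ovnkube-node itself is the blocker here: it keeps exiting, the kubelet refuses to restart it for "back-off 5m0s restarting failed container", so 10-ovn-kubernetes.conf is never written and every other pod's sandbox creation times out in turn. A minimal sketch of kubelet-style restart back-off follows; the doubling-from-10-seconds schedule is an assumption about the exact constants, though the 5-minute cap is quoted verbatim in the log:

```go
// Capped exponential restart back-off, as in CrashLoopBackOff.
package main

import (
	"fmt"
	"time"
)

func backoff(failures int) time.Duration {
	d := 10 * time.Second // assumed base delay after the first failure
	for i := 0; i < failures; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute // the "back-off 5m0s" cap from the log
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("failure %d -> wait %s before next restart\n", n, backoff(n))
	}
}
```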
Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.071282861Z" level=info msg="runSandbox: deleting pod ID 135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226 from idIndex" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.071310974Z" level=info msg="runSandbox: removing pod sandbox 135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.071325754Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.071337084Z" level=info msg="runSandbox: unmounting shmPath for sandbox 135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.087460918Z" level=info msg="runSandbox: removing pod sandbox from storage: 135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.093971842Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.094000738Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0" id=d54730c5-4501-40bf-bb55-f835e8ea4d73 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.094271 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.094324 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.094350 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-7v8f9" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.094400 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-7v8f9_openshift-ingress-canary(0dd28320-8b9c-4b86-baca-8c1d561a962c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-7v8f9_openshift-ingress-canary_0dd28320-8b9c-4b86-baca-8c1d561a962c_0(135a72b3e4b00a86c123081b830af78a40f0b39ca64a43b73e4dde169d021226): error adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-7v8f9/0dd28320-8b9c-4b86-baca-8c1d561a962c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-7v8f9" podUID=0dd28320-8b9c-4b86-baca-8c1d561a962c Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.595846579Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1): error removing pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.595889820Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.596516902Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a): error removing pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.596568160Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.596526557Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733): error removing pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.596760050Z" level=info msg="runSandbox: cleaning up namespaces 
after failing to run sandbox 4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.597253018Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1): error removing pod openshift-apiserver_apiserver-746c4bf98c-9x4mg from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.597312808Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.598301647Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1): error removing pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.598340633Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-52d51e87\x2df602\x2d4484\x2db984\x2d99ad8926798b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-52d51e87\x2df602\x2d4484\x2db984\x2d99ad8926798b.mount has successfully entered the 'dead' state. Jan 23 17:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-5fedca7c\x2daf01\x2d4eba\x2d8846\x2df68530340158.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-5fedca7c\x2daf01\x2d4eba\x2d8846\x2df68530340158.mount has successfully entered the 'dead' state. Jan 23 17:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d35ed7ae\x2df529\x2d4dd1\x2d86e3\x2d11d4aaeb6e25.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-d35ed7ae\x2df529\x2d4dd1\x2d86e3\x2d11d4aaeb6e25.mount has successfully entered the 'dead' state. 
Jan 23 17:56:32 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4015694f\x2df795\x2d40fc\x2d94d4\x2dcd927e2e3d88.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4015694f\x2df795\x2d40fc\x2d94d4\x2dcd927e2e3d88.mount has successfully entered the 'dead' state. Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.634307341Z" level=info msg="runSandbox: deleting pod ID 579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1 from idIndex" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.634334503Z" level=info msg="runSandbox: removing pod sandbox 579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.634349680Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.634361857Z" level=info msg="runSandbox: unmounting shmPath for sandbox 579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637286233Z" level=info msg="runSandbox: deleting pod ID 1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1 from idIndex" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637304690Z" level=info msg="runSandbox: deleting pod ID 4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733 from idIndex" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637330170Z" level=info msg="runSandbox: removing pod sandbox 4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637342255Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637353632Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637314855Z" level=info msg="runSandbox: removing pod sandbox 1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637404158Z" level=info 
msg="runSandbox: deleting container ID from idIndex for sandbox 1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637418627Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637430595Z" level=info msg="runSandbox: deleting pod ID 26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a from idIndex" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637457478Z" level=info msg="runSandbox: removing pod sandbox 26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637481267Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.637496809Z" level=info msg="runSandbox: unmounting shmPath for sandbox 26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.638269720Z" level=info msg="runSandbox: deleting pod ID 568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1 from idIndex" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.638293045Z" level=info msg="runSandbox: removing pod sandbox 568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.638305271Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.638317133Z" level=info msg="runSandbox: unmounting shmPath for sandbox 568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.649450591Z" level=info msg="runSandbox: removing pod sandbox from storage: 26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.649453807Z" level=info msg="runSandbox: removing pod sandbox from storage: 579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1" 
id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.653116284Z" level=info msg="runSandbox: releasing container name: k8s_POD_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.653136306Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0" id=b4350770-7154-40a2-8cce-45b8fc0e97da name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.653443194Z" level=info msg="runSandbox: removing pod sandbox from storage: 4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.653451045Z" level=info msg="runSandbox: removing pod sandbox from storage: 568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.653506 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.653561 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.653585 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.653638 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-868d5f6bf8-svlxj_openshift-authentication(69794e08-d62b-401c-8dea-a730bf37256a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-868d5f6bf8-svlxj_openshift-authentication_69794e08-d62b-401c-8dea-a730bf37256a_0(26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a): error adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/69794e08-d62b-401c-8dea-a730bf37256a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" podUID=69794e08-d62b-401c-8dea-a730bf37256a Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.653596554Z" level=info msg="runSandbox: removing pod sandbox from storage: 1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.656979132Z" level=info msg="runSandbox: releasing container name: k8s_POD_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.657000168Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0" id=722471f5-c2b6-4b88-bf4b-4a833f1f8779 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.657135 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.657165 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.657187 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.657232 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager(1886664c-cb49-48f7-b263-eff19ad90869)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5fdd49db4f-5q9jh_openshift-route-controller-manager_1886664c-cb49-48f7-b263-eff19ad90869_0(579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1): error adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/1886664c-cb49-48f7-b263-eff19ad90869]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" podUID=1886664c-cb49-48f7-b263-eff19ad90869 Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.660492793Z" level=info msg="runSandbox: releasing container name: k8s_POD_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.660516680Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0" id=20a932c9-f99f-4a2e-a707-8f3b2cb416fd name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.660694 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.660727 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.660748 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.660789 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-876b6ffdf-x4gbg_openshift-controller-manager(f6df27f7-bd15-488a-8ec8-6a52e1a72ddd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-876b6ffdf-x4gbg_openshift-controller-manager_f6df27f7-bd15-488a-8ec8-6a52e1a72ddd_0(4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733): error adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/f6df27f7-bd15-488a-8ec8-6a52e1a72ddd]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" podUID=f6df27f7-bd15-488a-8ec8-6a52e1a72ddd Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.663634888Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.663655973Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0" id=8ae80159-eea2-43ce-8290-8f48292c27b3 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.663891 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.663922 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.663945 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.663983 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver(b68fa2a4-e557-4154-b0c2-64f449cfd597)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-86c7cf6467-bbxls_openshift-oauth-apiserver_b68fa2a4-e557-4154-b0c2-64f449cfd597_0(568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1): error adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/b68fa2a4-e557-4154-b0c2-64f449cfd597]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" podUID=b68fa2a4-e557-4154-b0c2-64f449cfd597 Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.666638770Z" level=info msg="runSandbox: releasing container name: k8s_POD_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.666656016Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0" id=b99150a9-d270-4af1-b471-25f170480aae name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.666847 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.666879 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.666900 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:32.666938 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"apiserver-746c4bf98c-9x4mg_openshift-apiserver(43afcd6c-e482-449b-986d-bd52ed16ad2b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-746c4bf98c-9x4mg_openshift-apiserver_43afcd6c-e482-449b-986d-bd52ed16ad2b_0(1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1): error adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-apiserver/apiserver-746c4bf98c-9x4mg/43afcd6c-e482-449b-986d-bd52ed16ad2b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" podUID=43afcd6c-e482-449b-986d-bd52ed16ad2b Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:32.709015 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868d5f6bf8-svlxj" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:32.709102 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:32.709250 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-746c4bf98c-9x4mg" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709299442Z" level=info msg="Running pod sandbox: openshift-authentication/oauth-openshift-868d5f6bf8-svlxj/POD" id=6101d252-17bf-42d7-a360-f1080c372d50 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709337248Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709410582Z" level=info msg="Running pod sandbox: openshift-oauth-apiserver/apiserver-86c7cf6467-bbxls/POD" id=29d32261-c3d5-4422-a10c-de2e2ab096d9 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:32.709414 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-876b6ffdf-x4gbg" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709440252Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:56:32 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:32.709500 8631 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709543847Z" level=info msg="Running pod sandbox: openshift-apiserver/apiserver-746c4bf98c-9x4mg/POD" id=d3a7badb-f5d6-4c69-86ec-db4aa1d0b23d name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709569014Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709685420Z" level=info msg="Running pod sandbox: openshift-controller-manager/controller-manager-876b6ffdf-x4gbg/POD" id=667a68f3-1743-484c-af70-85d6bd0a153e name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709721274Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709688227Z" level=info msg="Running pod sandbox: openshift-route-controller-manager/route-controller-manager-5fdd49db4f-5q9jh/POD" id=fff05243-9a5e-4844-b105-f67111b46a14 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.709781493Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.734291025Z" level=info msg="Got pod network &{Name:oauth-openshift-868d5f6bf8-svlxj Namespace:openshift-authentication ID:d5addaac22c2625f82e711845e3ef93179d0dc3ae914faef861929ef8849f8de UID:69794e08-d62b-401c-8dea-a730bf37256a NetNS:/var/run/netns/a7432928-f996-4cb8-9aad-30e6058d918d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.734311932Z" level=info msg="Adding pod openshift-authentication_oauth-openshift-868d5f6bf8-svlxj to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.735913036Z" level=info msg="Got pod network &{Name:apiserver-746c4bf98c-9x4mg Namespace:openshift-apiserver ID:f79de1d1197323067138a2c9d062c2751fbfb4ad9ee82c865b7941a0eda406b8 UID:43afcd6c-e482-449b-986d-bd52ed16ad2b NetNS:/var/run/netns/ca4a7ee5-1f98-4716-a6da-8164bb079113 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.735936951Z" level=info msg="Adding pod openshift-apiserver_apiserver-746c4bf98c-9x4mg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.736732845Z" level=info msg="Got pod network &{Name:route-controller-manager-5fdd49db4f-5q9jh Namespace:openshift-route-controller-manager ID:297b895c8e7dd43395b9e5aa19fa08437d6971c2ea5215750f58f09b351845a5 UID:1886664c-cb49-48f7-b263-eff19ad90869 NetNS:/var/run/netns/fd05bca0-0c34-4b39-9eba-65c999d61b6e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: 
time="2023-01-23 17:56:32.736751601Z" level=info msg="Adding pod openshift-route-controller-manager_route-controller-manager-5fdd49db4f-5q9jh to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.737918939Z" level=info msg="Got pod network &{Name:controller-manager-876b6ffdf-x4gbg Namespace:openshift-controller-manager ID:2a1ec0fa6ffbf87832cab8afa34c30b01d318ad3cb905de6031451445e56b4fd UID:f6df27f7-bd15-488a-8ec8-6a52e1a72ddd NetNS:/var/run/netns/da87f0be-b5e8-4908-bf90-b8d69634d8a9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.737936935Z" level=info msg="Adding pod openshift-controller-manager_controller-manager-876b6ffdf-x4gbg to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.739863755Z" level=info msg="Got pod network &{Name:apiserver-86c7cf6467-bbxls Namespace:openshift-oauth-apiserver ID:a3db2caa293a1a97d7edba871233fef5751482528a751d935c9e082d4b36921f UID:b68fa2a4-e557-4154-b0c2-64f449cfd597 NetNS:/var/run/netns/420aae58-995e-4d10-a6a4-5652792259b7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:56:32 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:32.739892220Z" level=info msg="Adding pod openshift-oauth-apiserver_apiserver-86c7cf6467-bbxls to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-f5fec57d\x2df8b3\x2d4fe9\x2d8950\x2daa70d90995a1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-f5fec57d\x2df8b3\x2d4fe9\x2d8950\x2daa70d90995a1.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-f5fec57d\x2df8b3\x2d4fe9\x2d8950\x2daa70d90995a1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-f5fec57d\x2df8b3\x2d4fe9\x2d8950\x2daa70d90995a1.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-f5fec57d\x2df8b3\x2d4fe9\x2d8950\x2daa70d90995a1.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-f5fec57d\x2df8b3\x2d4fe9\x2d8950\x2daa70d90995a1.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-52d51e87\x2df602\x2d4484\x2db984\x2d99ad8926798b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-52d51e87\x2df602\x2d4484\x2db984\x2d99ad8926798b.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-52d51e87\x2df602\x2d4484\x2db984\x2d99ad8926798b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-52d51e87\x2df602\x2d4484\x2db984\x2d99ad8926798b.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-5fedca7c\x2daf01\x2d4eba\x2d8846\x2df68530340158.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-5fedca7c\x2daf01\x2d4eba\x2d8846\x2df68530340158.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-5fedca7c\x2daf01\x2d4eba\x2d8846\x2df68530340158.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-5fedca7c\x2daf01\x2d4eba\x2d8846\x2df68530340158.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d35ed7ae\x2df529\x2d4dd1\x2d86e3\x2d11d4aaeb6e25.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-d35ed7ae\x2df529\x2d4dd1\x2d86e3\x2d11d4aaeb6e25.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d35ed7ae\x2df529\x2d4dd1\x2d86e3\x2d11d4aaeb6e25.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-d35ed7ae\x2df529\x2d4dd1\x2d86e3\x2d11d4aaeb6e25.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4015694f\x2df795\x2d40fc\x2d94d4\x2dcd927e2e3d88.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4015694f\x2df795\x2d40fc\x2d94d4\x2dcd927e2e3d88.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4015694f\x2df795\x2d40fc\x2d94d4\x2dcd927e2e3d88.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4015694f\x2df795\x2d40fc\x2d94d4\x2dcd927e2e3d88.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-26d6bda3b6b33cd19e52c263935774928adc8e29ce37ed97f147d1fa96300c0a-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-568e11843685d864f8c45f945685fb3d6419791da2812644c02accbb2ce9e3a1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-1ecf22b749d5a5496b3ed58eb9e06fcc1b5c13f4cdb2de913bf67eb2293b7bb1-userdata-shm.mount has successfully entered the 'dead' state. 
Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-4c383b664a89ff044fac6b4f223deb249839e50e82f421cc41b54566bce7a733-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:56:33 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-579380dedab11f7c25de88b5e154ec351634e64735f9c578d9324626a28296a1-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.033621186Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d): error removing pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.033911790Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4c52b276\x2d688d\x2d4f70\x2db6c2\x2dd10274b7afb7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-utsns-4c52b276\x2d688d\x2d4f70\x2db6c2\x2dd10274b7afb7.mount has successfully entered the 'dead' state. Jan 23 17:56:35 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4c52b276\x2d688d\x2d4f70\x2db6c2\x2dd10274b7afb7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ipcns-4c52b276\x2d688d\x2d4f70\x2db6c2\x2dd10274b7afb7.mount has successfully entered the 'dead' state. Jan 23 17:56:35 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4c52b276\x2d688d\x2d4f70\x2db6c2\x2dd10274b7afb7.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-4c52b276\x2d688d\x2d4f70\x2db6c2\x2dd10274b7afb7.mount has successfully entered the 'dead' state. 
Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.076305464Z" level=info msg="runSandbox: deleting pod ID b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d from idIndex" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.076330189Z" level=info msg="runSandbox: removing pod sandbox b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.076343711Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.076356529Z" level=info msg="runSandbox: unmounting shmPath for sandbox b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-containers-storage-overlay\x2dcontainers-b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d-userdata-shm.mount has successfully entered the 'dead' state. Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.096464031Z" level=info msg="runSandbox: removing pod sandbox from storage: b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.099370007Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.099388390Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0" id=fa07d91a-b7a3-4a5f-9bdc-b9cfda86d8ff name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:35.099620 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Jan 23 17:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:35.099673 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:35.099697 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" Jan 23 17:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:35.099749 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager(2284ac10-60cf-4768-bd24-3ea63b730ce6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-guard-hub-master-0.workload.bos2.lab_openshift-kube-controller-manager_2284ac10-60cf-4768-bd24-3ea63b730ce6_0(b2ee0dfd0d3b336892e46fea60a581f09044e858f0dc7e2862dab2fa20513a9d): error adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/2284ac10-60cf-4768-bd24-3ea63b730ce6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab" podUID=2284ac10-60cf-4768-bd24-3ea63b730ce6 Jan 23 17:56:35 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:35.996548 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srzv5" Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.997145290Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-srzv5/POD" id=eaad8915-0ca6-4549-b4e9-2e079276e203 name=/runtime.v1.RuntimeService/RunPodSandbox Jan 23 17:56:35 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:35.997187321Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Jan 23 17:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:36.008773705Z" level=info msg="Got pod network &{Name:dns-default-srzv5 Namespace:openshift-dns ID:5be96eb690b27bccb5f2713af19f641ba5c233b7643788e726058daf2bda73fd UID:3a8bb7cc-95f9-45d8-bb73-c7ddcdcbc28e NetNS:/var/run/netns/e0e8e7ee-8c3d-4621-bea9-a897412b90e6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Jan 23 17:56:36 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:36.008796066Z" level=info msg="Adding pod openshift-dns_dns-default-srzv5 to CNI network \"multus-cni-network\" (type=multus)" Jan 23 17:56:36 hub-master-0.workload.bos2.lab sshd[197584]: Accepted publickey for core from 2600:52:7:18::11 port 60412 ssh2: ED25519 SHA256:51RsaYMAVDXjZ4ofvNlClwmCDL0eebyMyw8HOKcupS0 Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[1]: Created slice User Slice of UID 1000. -- Subject: Unit user-1000.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-1000.slice has finished starting up. -- -- The start-up result is done. 
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[1]: Starting User runtime directory /run/user/1000...
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd-logind[3052]: New session 7 of user core.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[1]: Started User runtime directory /run/user/1000.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[1]: Starting User Manager for UID 1000...
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: pam_unix(systemd-user:session): session opened for user core by (uid=0)
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: /usr/lib/systemd/user/podman-kube@.service:10: Failed to parse service restart specifier, ignoring: never
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Listening on GnuPG network certificate management daemon.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Created slice podman\x2dkube.slice.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Reached target Paths.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Starting D-Bus User Message Bus Socket.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Starting Create User's Volatile Files and Directories...
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started Podman auto-update timer.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Reached target Timers.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Listening on Podman API Socket.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Listening on GnuPG cryptographic agent and passphrase cache.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started Create User's Volatile Files and Directories.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Listening on D-Bus User Message Bus Socket.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Reached target Sockets.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Reached target Basic System.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[1]: Started User Manager for UID 1000.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Starting Podman auto-update service...
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[1]: Started Session 7 of user core.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Starting Podman Start All Containers With Restart Policy Set To Always...
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Starting A template for running K8s workloads via podman-play-kube...
Jan 23 17:56:36 hub-master-0.workload.bos2.lab sshd[197584]: pam_unix(sshd:session): session opened for user core by (uid=0)
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Starting Podman API Service...
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started Podman API Service.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197632]: time="2023-01-23T17:56:36Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197630]: time="2023-01-23T17:56:36Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197703]: time="2023-01-23T17:56:36Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197704]: time="2023-01-23T17:56:36Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197704]: time="2023-01-23T17:56:36Z" level=info msg="Setting parallel job count to 337"
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197707]: Error: open default: no such file or directory
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197703]: time="2023-01-23T17:56:36Z" level=info msg="Setting parallel job count to 337"
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197703]: time="2023-01-23T17:56:36Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197703]: time="2023-01-23T17:56:36Z" level=info msg="API service listening on \"@0009e\". URI: \"@0009e\""
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197703]: time="2023-01-23T17:56:36Z" level=info msg="API service listening on \"@0009e\""
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started D-Bus User Message Bus.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab podman[197703]: Error: failed to start API service: accept unixgram @0009e: accept4: operation not supported
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Created slice user.slice.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started podman-pause-b7b43cab.scope.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started podman-pause-6e22a8fa.scope.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: podman.service: Main process exited, code=exited, status=125/n/a
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: podman.service: Failed with result 'exit-code'.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: podman-kube@default.service: Main process exited, code=exited, status=125/n/a
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: podman-kube@default.service: Failed with result 'exit-code'.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Failed to start A template for running K8s workloads via podman-play-kube.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started podman-pause-f8424f5c.scope.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started Podman Start All Containers With Restart Policy Set To Always.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started podman-pause-e90fcff4.scope.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Started Podman auto-update service.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Reached target Default.
Jan 23 17:56:36 hub-master-0.workload.bos2.lab systemd[197615]: Startup finished in 513ms.
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.034361486Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b): error removing pod openshift-multus_network-metrics-daemon-dzwx9 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.034745149Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-847e2c2c\x2d8bfd\x2d44e7\x2db2ec\x2d2a24d123e0d1.mount: Succeeded.
Jan 23 17:56:39 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-847e2c2c\x2d8bfd\x2d44e7\x2db2ec\x2d2a24d123e0d1.mount: Succeeded.
Jan 23 17:56:39 hub-master-0.workload.bos2.lab systemd[1]: run-netns-847e2c2c\x2d8bfd\x2d44e7\x2db2ec\x2d2a24d123e0d1.mount: Succeeded.
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.072397400Z" level=info msg="runSandbox: deleting pod ID 4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b from idIndex" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.072427571Z" level=info msg="runSandbox: removing pod sandbox 4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.072441114Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.072454512Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b-userdata-shm.mount: Succeeded.
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.080431896Z" level=info msg="runSandbox: removing pod sandbox from storage: 4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.083919255Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:39.083938154Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0" id=e43a5814-e73f-4c3f-8bf2-378fdaa4093b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:39.084137 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:56:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:39.084318 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:56:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:39.084341 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:56:39 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:39.084386 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-dzwx9_openshift-multus(fc516524-2ee1-45e5-8b33-0266acf098d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-dzwx9_openshift-multus_fc516524-2ee1-45e5-8b33-0266acf098d1_0(4e2f45ca98f6ccdc1fd13df5a9b4dec76717fddbe40ba9beaaee15a8a658037b): error adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-dzwx9/fc516524-2ee1-45e5-8b33-0266acf098d1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-dzwx9" podUID=fc516524-2ee1-45e5-8b33-0266acf098d1
Jan 23 17:56:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:41.996410 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:56:41 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:41.997106 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:56:42 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:42.995571 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:56:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:42.995909889Z" level=info msg="Running pod sandbox: openshift-kube-scheduler/openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab/POD" id=9dc1cca8-e300-48cc-a991-ae57ce923ce5 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:42 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:42.995949969Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:56:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:43.008330039Z" level=info msg="Got pod network &{Name:openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-scheduler ID:6ff353244da4be5ede1fdb631b8441bf2809ff7783a4e711a3d3c735338bf84d UID:7cca1a4c-e8cc-4938-9e14-a4d8d979ad14 NetNS:/var/run/netns/a2604a9e-2606-44a7-8817-6d0d4c542381 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:43.008351186Z" level=info msg="Adding pod openshift-kube-scheduler_openshift-kube-scheduler-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:43 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:43.996000 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7v8f9"
Jan 23 17:56:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:43.996396316Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-7v8f9/POD" id=e18ff3fc-6e9a-48c2-8776-073c387f56f7 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:43 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:43.996434030Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.007627009Z" level=info msg="Got pod network &{Name:ingress-canary-7v8f9 Namespace:openshift-ingress-canary ID:6c812afa84d4e79df5002079db15fbb1e77abdc5ef6883f49d27b8b8af7b5ae1 UID:0dd28320-8b9c-4b86-baca-8c1d561a962c NetNS:/var/run/netns/eb97f15d-21a4-43ad-9424-af170419e619 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.007648662Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-7v8f9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.034521018Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde): error removing pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.034549069Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-4955f22f\x2d58bc\x2d4e02\x2da776\x2d4eff5a01d9a4.mount: Succeeded.
Jan 23 17:56:44 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-4955f22f\x2d58bc\x2d4e02\x2da776\x2d4eff5a01d9a4.mount: Succeeded.
Jan 23 17:56:44 hub-master-0.workload.bos2.lab systemd[1]: run-netns-4955f22f\x2d58bc\x2d4e02\x2da776\x2d4eff5a01d9a4.mount: Succeeded.
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.083304440Z" level=info msg="runSandbox: deleting pod ID da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde from idIndex" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.083327139Z" level=info msg="runSandbox: removing pod sandbox da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.083339980Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.083351292Z" level=info msg="runSandbox: unmounting shmPath for sandbox da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde-userdata-shm.mount: Succeeded.
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.103465641Z" level=info msg="runSandbox: removing pod sandbox from storage: da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.106259311Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:44.106278137Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0" id=31d222ca-7f10-4e26-970a-62d1486bb52e name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:44.106511 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:56:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:44.106555 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:56:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:44.106578 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:56:44 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:44.106634 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(bf9abfd8-f6ab-41d0-9984-1c374f00d734)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_bf9abfd8-f6ab-41d0-9984-1c374f00d734_0(da27a0130b5fe1fd7621733d1ad9b9c002fbe8fa0775ad0af94668eae4d24cde): error adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/bf9abfd8-f6ab-41d0-9984-1c374f00d734]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab" podUID=bf9abfd8-f6ab-41d0-9984-1c374f00d734
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.033353168Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14): error removing pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.033536049Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-8cea57e0\x2d6a26\x2d4c90\x2dbe0e\x2d0f9ecd5fe42a.mount: Succeeded.
Jan 23 17:56:46 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-8cea57e0\x2d6a26\x2d4c90\x2dbe0e\x2d0f9ecd5fe42a.mount: Succeeded.
Jan 23 17:56:46 hub-master-0.workload.bos2.lab systemd[1]: run-netns-8cea57e0\x2d6a26\x2d4c90\x2dbe0e\x2d0f9ecd5fe42a.mount: Succeeded.
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.088405606Z" level=info msg="runSandbox: deleting pod ID ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14 from idIndex" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.088438731Z" level=info msg="runSandbox: removing pod sandbox ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.088454236Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.088476141Z" level=info msg="runSandbox: unmounting shmPath for sandbox ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14-userdata-shm.mount: Succeeded.
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.104432317Z" level=info msg="runSandbox: removing pod sandbox from storage: ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.107832452Z" level=info msg="runSandbox: releasing container name: k8s_POD_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:46.107852328Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0" id=1cfb2eaf-1236-4f9a-8405-33b135aeb492 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:46.108055 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:56:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:46.108105 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:56:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:46.108128 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:56:46 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:46.108177 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(16c1efa7-495c-45d5-b9c1-00d078cb4114)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-guard-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_16c1efa7-495c-45d5-b9c1-00d078cb4114_0(ad025f1c154c86ae982ce816b00a0ed1da4f3f314759fde0891bae0175facc14): error adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/16c1efa7-495c-45d5-b9c1-00d078cb4114]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab" podUID=16c1efa7-495c-45d5-b9c1-00d078cb4114
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.032495881Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757): error removing pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.032534199Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-92da0b2e\x2d5532\x2d48c5\x2d8d2f\x2d2412d3545a86.mount: Succeeded.
Jan 23 17:56:47 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-92da0b2e\x2d5532\x2d48c5\x2d8d2f\x2d2412d3545a86.mount: Succeeded.
Jan 23 17:56:47 hub-master-0.workload.bos2.lab systemd[1]: run-netns-92da0b2e\x2d5532\x2d48c5\x2d8d2f\x2d2412d3545a86.mount: Succeeded.
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.076276932Z" level=info msg="runSandbox: deleting pod ID d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757 from idIndex" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.076300514Z" level=info msg="runSandbox: removing pod sandbox d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.076314490Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.076325496Z" level=info msg="runSandbox: unmounting shmPath for sandbox d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757-userdata-shm.mount: Succeeded.
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.096458087Z" level=info msg="runSandbox: removing pod sandbox from storage: d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.099903307Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:47.099921172Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0" id=75965aac-e937-49b3-b8d3-ff75688e09dd name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:47.100155 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:56:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:47.100200 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:56:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:47.100230 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:56:47 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:47.100280 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50_0(d7b9a6db1a33668ff463a03609b9e6a9c3e4c59fc761d6426db9a210e2016757): error adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab" podUID=2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.037313709Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291): error removing pod openshift-network-diagnostics_network-check-target-qs9w4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.037352867Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-b1c25554\x2d9f5f\x2d47c3\x2db084\x2d778a2e273108.mount: Succeeded.
Jan 23 17:56:48 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-b1c25554\x2d9f5f\x2d47c3\x2db084\x2d778a2e273108.mount: Succeeded.
Jan 23 17:56:48 hub-master-0.workload.bos2.lab systemd[1]: run-netns-b1c25554\x2d9f5f\x2d47c3\x2db084\x2d778a2e273108.mount: Succeeded.
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.085306566Z" level=info msg="runSandbox: deleting pod ID 5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291 from idIndex" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.085331107Z" level=info msg="runSandbox: removing pod sandbox 5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.085343905Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.085355871Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.105447707Z" level=info msg="runSandbox: removing pod sandbox from storage: 5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.109020457Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:48.109038721Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0" id=7c8c1118-cbeb-485d-9551-c7d948144952 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:48.109270 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:56:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:48.109315 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:56:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:48.109336 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:56:48 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:48.109379 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-qs9w4_openshift-network-diagnostics(0fdadbfc-e471-4e10-97e8-80b8e881aec6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-qs9w4_openshift-network-diagnostics_0fdadbfc-e471-4e10-97e8-80b8e881aec6_0(5c06483da3adc35d757737b666143df7d6fdf45b2563d0146f4cb59a3bfda291): error adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-qs9w4/0fdadbfc-e471-4e10-97e8-80b8e881aec6]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-qs9w4" podUID=0fdadbfc-e471-4e10-97e8-80b8e881aec6
Jan 23 17:56:49 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:49.995945 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:56:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:49.996313561Z" level=info msg="Running pod sandbox: openshift-kube-controller-manager/kube-controller-manager-guard-hub-master-0.workload.bos2.lab/POD" id=9c6d0152-1fc0-4012-a993-f8ff291b9c9b name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:49 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:49.996355586Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:50.013350251Z" level=info msg="Got pod network &{Name:kube-controller-manager-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-controller-manager ID:4836f1c70ba8815ebcb362606285968a1eb71d375e64a3888ad4b5700c19b94d UID:2284ac10-60cf-4768-bd24-3ea63b730ce6 NetNS:/var/run/netns/e1c7db6d-280e-436d-9056-2f86e8bb08fe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:50.013376062Z" level=info msg="Adding pod openshift-kube-controller-manager_kube-controller-manager-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:50 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:50.996281 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dzwx9"
Jan 23 17:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:50.996612035Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-dzwx9/POD" id=a181b9a5-dcfc-4721-be00-01497e92eed0 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:50 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:50.996655059Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:56:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:51.007671452Z" level=info msg="Got pod network &{Name:network-metrics-daemon-dzwx9 Namespace:openshift-multus ID:6f6d3469d83a12f6268dadaa6135b3a4217fc2fbfaafc13541d0d4fd153138f0 UID:fc516524-2ee1-45e5-8b33-0266acf098d1 NetNS:/var/run/netns/2691059f-93ba-4fc8-a63f-072f1f4f6330 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:51 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:51.007698285Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-dzwx9 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:54.996939 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:56:54 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:54.997609 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.033716942Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515): error removing pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.033760014Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-d614fd19\x2d319c\x2d4bc7\x2d8b74\x2d783336f5e2a0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-d614fd19\x2d319c\x2d4bc7\x2d8b74\x2d783336f5e2a0.mount has successfully entered the 'dead' state.
Jan 23 17:56:55 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-d614fd19\x2d319c\x2d4bc7\x2d8b74\x2d783336f5e2a0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-d614fd19\x2d319c\x2d4bc7\x2d8b74\x2d783336f5e2a0.mount has successfully entered the 'dead' state.
Jan 23 17:56:55 hub-master-0.workload.bos2.lab systemd[1]: run-netns-d614fd19\x2d319c\x2d4bc7\x2d8b74\x2d783336f5e2a0.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-d614fd19\x2d319c\x2d4bc7\x2d8b74\x2d783336f5e2a0.mount has successfully entered the 'dead' state.
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.079357187Z" level=info msg="runSandbox: deleting pod ID faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515 from idIndex" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.079383944Z" level=info msg="runSandbox: removing pod sandbox faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.079398782Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.079410170Z" level=info msg="runSandbox: unmounting shmPath for sandbox faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.097432113Z" level=info msg="runSandbox: removing pod sandbox from storage: faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.100383671Z" level=info msg="runSandbox: releasing container name: k8s_POD_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:55.100402117Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0" id=37c8816e-008f-4ffd-adb5-72c73e118414 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:55.100599 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:56:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:55.100639 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:56:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:55.100674 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:56:55 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:56:55.100715 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd(16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-guard-hub-master-0.workload.bos2.lab_openshift-etcd_16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b_0(faec9e9b7d7b29a71a95654feabb62b896678582818852991d2c8bb8311b9515): error adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab" podUID=16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b
Jan 23 17:56:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:56.694868522Z" level=info msg="NetworkStart: stopping network for sandbox 2dfc90ef95cd32a4a574ec95035b6bb8493f123c8ed9390feb6518338fc527f9" id=baa7b8cf-6e58-4c07-865e-dca34292ae62 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:56.695014003Z" level=info msg="Got pod network &{Name:revision-pruner-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:2dfc90ef95cd32a4a574ec95035b6bb8493f123c8ed9390feb6518338fc527f9 UID:4118bc95-e963-4fc7-bb2e-ceda3fe6f298 NetNS:/var/run/netns/de25fa00-e4dd-40b8-b926-67301da68166 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:56.695037137Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:56:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:56.695044167Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:56:56 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:56.695050591Z" level=info msg="Deleting pod openshift-kube-apiserver_revision-pruner-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:58.141984761Z" level=warning msg="Found defunct process with PID 7327 (runc)"
Jan 23 17:56:58 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:58.996202 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab"
Jan 23 17:56:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:58.996621209Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-8-hub-master-0.workload.bos2.lab/POD" id=f1eeb369-f73c-4faf-9924-8c735299e0af name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:58 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:58.996680458Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:56:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:59.008837282Z" level=info msg="Got pod network &{Name:revision-pruner-8-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:e901630c545162d7d09758074eb4d65081fab341051f88894a99dd85e5fae0f0 UID:bf9abfd8-f6ab-41d0-9984-1c374f00d734 NetNS:/var/run/netns/08b07f0e-7e36-4fe3-82da-bb98f7bfd0c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:56:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:59.008863507Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-8-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:56:59 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:56:59.996586 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:56:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:59.996960693Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/kube-apiserver-guard-hub-master-0.workload.bos2.lab/POD" id=a62e1fea-9ccd-468e-bc9d-3713d296d5d3 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:56:59 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:56:59.997186473Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.007921176Z" level=info msg="Got pod network &{Name:kube-apiserver-guard-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:94b3fde688ba0a95c34dcc6d8bbadfee9b72db36b79e4f57c966b424fafdf4ad UID:16c1efa7-495c-45d5-b9c1-00d078cb4114 NetNS:/var/run/netns/ccaea150-6b30-4579-bc53-a114ab9d55cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.007941464Z" level=info msg="Adding pod openshift-kube-apiserver_kube-apiserver-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.034106404Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b): error removing pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.034142538Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab systemd[1]: run-utsns-545c4126\x2db8f8\x2d4b1f\x2dbd63\x2dc1ae7177f02e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-utsns-545c4126\x2db8f8\x2d4b1f\x2dbd63\x2dc1ae7177f02e.mount has successfully entered the 'dead' state.
Jan 23 17:57:00 hub-master-0.workload.bos2.lab systemd[1]: run-ipcns-545c4126\x2db8f8\x2d4b1f\x2dbd63\x2dc1ae7177f02e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-ipcns-545c4126\x2db8f8\x2d4b1f\x2dbd63\x2dc1ae7177f02e.mount has successfully entered the 'dead' state.
Jan 23 17:57:00 hub-master-0.workload.bos2.lab systemd[1]: run-netns-545c4126\x2db8f8\x2d4b1f\x2dbd63\x2dc1ae7177f02e.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-netns-545c4126\x2db8f8\x2d4b1f\x2dbd63\x2dc1ae7177f02e.mount has successfully entered the 'dead' state.
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.076315032Z" level=info msg="runSandbox: deleting pod ID 991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b from idIndex" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.076345268Z" level=info msg="runSandbox: removing pod sandbox 991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.076360554Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.076377790Z" level=info msg="runSandbox: unmounting shmPath for sandbox 991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab systemd[1]: run-containers-storage-overlay\x2dcontainers-991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b-userdata-shm.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-containers-storage-overlay\x2dcontainers-991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b-userdata-shm.mount has successfully entered the 'dead' state.
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.093469097Z" level=info msg="runSandbox: removing pod sandbox from storage: 991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.096383321Z" level=info msg="runSandbox: releasing container name: k8s_POD_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:00.096404329Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0" id=8b5bd6fb-0d35-4a7f-8812-22f6ab54438d name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:57:00.096636 8631 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Jan 23 17:57:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:57:00.096676 8631 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:57:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:57:00.096698 8631 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:57:00 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:57:00.096748 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver(6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-10-hub-master-0.workload.bos2.lab_openshift-kube-apiserver_6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa_0(991119f4d67d97d338b73e681dd6ed46c9e5cc8f7ec143c2bd7dce0ae5326f4b): error adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab" podUID=6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa
Jan 23 17:57:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:57:01.996142 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-qs9w4"
Jan 23 17:57:01 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:57:01.996233 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab"
Jan 23 17:57:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:01.996552790Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-9-hub-master-0.workload.bos2.lab/POD" id=cf92f572-f432-4e45-9a63-c67656ef8256 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:01.996589740Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:57:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:01.996644080Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-qs9w4/POD" id=f1949bc1-35f1-400b-ba46-8d0f5f1f156a name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:01 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:01.996671401Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:57:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:02.017706838Z" level=info msg="Got pod network &{Name:revision-pruner-9-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:c09443debbcdba80f3bd4f4fe2e2d09592b258cf900c54a04f0bcfaef89e6b4d UID:2dd7d41b-a444-4ab3-8a7b-b6aff6fb5d50 NetNS:/var/run/netns/ac92ae3c-4d88-44e3-bd8c-5502831d742a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:57:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:02.017735019Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-9-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:57:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:02.018557782Z" level=info msg="Got pod network &{Name:network-check-target-qs9w4 Namespace:openshift-network-diagnostics ID:ac40fe4d7ca2cfed0b1ca667a9cb4c223cd9496c5af229391cf7a217a6d9373b UID:0fdadbfc-e471-4e10-97e8-80b8e881aec6 NetNS:/var/run/netns/6e118a45-8254-42b5-860b-adb654d3dae4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:57:02 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:02.018578662Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-qs9w4 to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:57:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:57:05.996229 8631 scope.go:115] "RemoveContainer" containerID="890f73fe1f6213114d64e922ee82fa9254be590e4c8a736b8b6b58768f789ea2"
Jan 23 17:57:05 hub-master-0.workload.bos2.lab kubenswrapper[8631]: E0123 17:57:05.996742 8631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ovnkube-node pod=ovnkube-node-897lw_openshift-ovn-kubernetes(409cdcf0-1eab-47ad-9389-ad5809e748ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-897lw" podUID=409cdcf0-1eab-47ad-9389-ad5809e748ff
Jan 23 17:57:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:07.720685169Z" level=info msg="NetworkStart: stopping network for sandbox fba9b83270edd990cf7820d45abcb6731cc8d2c24111f9842e5a35e70d5a9d13" id=78a30dce-ade9-4ba3-8cbd-b423ba36156c name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:07.720881301Z" level=info msg="Got pod network &{Name:installer-11-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:fba9b83270edd990cf7820d45abcb6731cc8d2c24111f9842e5a35e70d5a9d13 UID:bf374316-9255-4614-af0e-15402ae67a30 NetNS:/var/run/netns/a52eedf0-3348-402e-8bc9-c0e27a22f18e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:57:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:07.720904560Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Jan 23 17:57:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:07.720911655Z" level=warning msg="falling back to loading from existing plugins on disk"
Jan 23 17:57:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:07.720918264Z" level=info msg="Deleting pod openshift-kube-apiserver_installer-11-hub-master-0.workload.bos2.lab from CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:57:07 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:57:07.996744 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab"
Jan 23 17:57:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:07.997185456Z" level=info msg="Running pod sandbox: openshift-etcd/etcd-guard-hub-master-0.workload.bos2.lab/POD" id=7b6a1e65-6628-4eed-bf86-e61946903d13 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:07 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:07.997233703Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:57:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:08.007961796Z" level=info msg="Got pod network &{Name:etcd-guard-hub-master-0.workload.bos2.lab Namespace:openshift-etcd ID:e50a2e3e17d2b034108a886078fb713d95c1e8b897b6fb59be42d18609920a64 UID:16a4fd86-c6fa-40ea-aa9b-a2f91d9c275b NetNS:/var/run/netns/db98d5aa-38ae-4a6e-b8be-ffd4831054f1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:57:08 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:08.007985974Z" level=info msg="Adding pod openshift-etcd_etcd-guard-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"
Jan 23 17:57:12 hub-master-0.workload.bos2.lab kubenswrapper[8631]: I0123 17:57:12.995469 8631 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab"
Jan 23 17:57:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:12.995805939Z" level=info msg="Running pod sandbox: openshift-kube-apiserver/revision-pruner-10-hub-master-0.workload.bos2.lab/POD" id=d8d24710-26e5-4d3f-af3e-5d262692d558 name=/runtime.v1.RuntimeService/RunPodSandbox
Jan 23 17:57:12 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:12.995846365Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Jan 23 17:57:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:13.006636180Z" level=info msg="Got pod network &{Name:revision-pruner-10-hub-master-0.workload.bos2.lab Namespace:openshift-kube-apiserver ID:0d663c4337bad51b55967b3fb63e114a0310396ebb82bd87426e1ae7617971c8 UID:6e7703f8-c0f2-4b5d-bb68-b729d8aa90fa NetNS:/var/run/netns/d124e2d9-32dc-4379-89ab-fe6d003b1572 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Jan 23 17:57:13 hub-master-0.workload.bos2.lab crio[8584]: time="2023-01-23 17:57:13.006656675Z" level=info msg="Adding pod openshift-kube-apiserver_revision-pruner-10-hub-master-0.workload.bos2.lab to CNI network \"multus-cni-network\" (type=multus)"